1 Introduction

A metric graph is a combinatorial graph whose edges are considered as intervals of the real line, each carrying a distance, glued together according to the combinatorial structure. The resulting metric measure space allows one to introduce a family of differential operators acting on each edge \(\textbf{e}\), considered as an interval \((0, \ell _\textbf{e})\), with boundary conditions at the vertices. We refer to the pair formed by the metric graph and the family of differential operators as a quantum graph. During the last two decades, quantum graphs have become an extremely popular subject because of numerous applications in mathematical physics, chemistry and engineering. The literature on quantum graphs is vast and extensive, and there is no chance of giving even a brief overview of the subject here. We only mention a few recent monographs and collected works with a comprehensive bibliography [6, 7, 23, 25, 31, 40, 43].

The historical motivation of the Cheeger cut problem is an isoperimetric-type inequality first proved by J. Cheeger in [16] in the context of compact, n-dimensional Riemannian manifolds M without boundary. As a consequence, one obtains a Poincaré inequality whose optimal constant is bounded from below by a geometric quantity. Let \(\lambda _1(M)\) be the least non-zero eigenvalue of the Laplace-Beltrami operator on M; Cheeger proved that

$$\begin{aligned} \lambda _1(M)\ge \frac{1}{4} h(M)^2, \quad h(M):= \inf _{A \subset M} \frac{P(A)}{ \min \{ V(A), V(M \setminus A) \}}, \end{aligned}$$
(1.1)

where V(A) and P(A) denote, respectively, the Riemannian volume and perimeter of A.

The first Cheeger estimates on discrete graphs are due to Dodziuk [21] and Alon and Milman [1]. Since then, these estimates have been improved and several variants have been obtained. Consider a finite weighted connected graph \(G =(V, E)\), where \(V = \{x_1, \ldots , x_n \}\) is the set of vertices (or nodes) and E the set of edges, which are weighted by a function \(w_{ji}= w_{ij} \ge 0\), \((x_i,x_j) \in E\). In this context, the Cheeger cut value of a partition \(\{ S, S^c\}\) (\(S^c:= V \setminus S\)) of V is defined as

$$\begin{aligned} \mathcal {C}(S):= \frac{\mathrm{Cut}(S,S^c)}{\min \{\mathrm{vol}(S), \mathrm{vol}(S^c)\}}, \end{aligned}$$

where \(\mathrm{Cut}(A,B) = \sum _{x_i \in A, x_j \in B} w_{ij}\) and \(\mathrm{vol}(S)\) is the volume of S, defined as \(\mathrm{vol}(S):= \sum _{x_i \in S} d_{x_i}\), where \(d_{x_i}:= \sum _{j=1}^n w_{ij}\) is the weight at the vertex \(x_i\). Then,

$$\begin{aligned} h(G) := \min _{S \subset V} \mathcal {C}(S) \end{aligned}$$
(1.2)

is called the Cheeger constant, and a partition \(\{ S, S^c\}\) of V is called a Cheeger cut of G if \(h(G)=\mathcal {C}(S)\). Unfortunately, the Cheeger minimization problem of computing h(G) is NP-hard [27, 46]. However, it turns out that h(G) can be approximated by the first positive eigenvalue \(\lambda _1\) of the graph Laplacian thanks to the following Cheeger inequality [17]:

$$\begin{aligned} \frac{\lambda _1}{2} \le h(G) \le \sqrt{2\lambda _1}. \end{aligned}$$

This motivates the spectral clustering method [33], which, in its simplest form, thresholds the eigenvector associated with the least non-zero eigenvalue of the graph Laplacian to get an approximation to the Cheeger constant and, moreover, to a Cheeger cut. In order to achieve a better approximation than the one provided by the classical spectral clustering method, a spectral clustering based on the graph p-Laplacian was developed in [10], where it is shown that the second eigenvalue of the graph p-Laplacian tends to the Cheeger constant h(G) as \(p \rightarrow 1^+\). In [46] the idea was further developed by directly considering the variational characterization of the Cheeger constant h(G)

$$\begin{aligned} h(G) = \min _{u \in L^1} \frac{ \vert u \vert _{TV}}{\Vert u - \mathrm{median}(u) \Vert _1}, \end{aligned}$$
(1.3)

where

$$\begin{aligned} \vert u \vert _{TV} := \frac{1}{2} \sum _{i,j=1}^n w_{ij} \vert u(x_i) - u(x_j) \vert . \end{aligned}$$

In [46] it was proved that the variational problem (1.3) provides an exact solution of the Cheeger cut problem: if a global minimizer u of (1.3) can be computed, then it can be shown that this minimizer is the indicator function of a set \(\Omega \) (i.e. \(u = {\chi }_\Omega \)) corresponding to a solution of the NP-hard problem (1.2).
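To make these discrete notions concrete, the following sketch (with an illustrative toy graph of our own choosing, not taken from the cited works) computes h(G) by brute force, computes the second eigenvalue of the normalized graph Laplacian to check the Cheeger inequality, and thresholds the corresponding eigenvector as in the simplest form of spectral clustering.

```python
import itertools
import numpy as np

# Illustrative toy graph (our own choice): two triangles joined by a weak bridge.
n = 6
W = np.zeros((n, n))
for i, j, w in [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0),   # first triangle
                (3, 4, 1.0), (3, 5, 1.0), (4, 5, 1.0),   # second triangle
                (2, 3, 0.1)]:                             # weak bridge
    W[i, j] = W[j, i] = w
d = W.sum(axis=1)                                          # vertex weights d_{x_i}

def cheeger_value(S):
    """Cut(S, S^c) / min(vol(S), vol(S^c)) for a non-trivial subset S of vertices."""
    S = np.asarray(S)
    Sc = np.setdiff1d(np.arange(n), S)
    return W[np.ix_(S, Sc)].sum() / min(d[S].sum(), d[Sc].sum())

# Brute force over all non-trivial subsets (feasible only for tiny graphs).
subsets = (list(c) for k in range(1, n) for c in itertools.combinations(range(n), k))
hG, best_S = min((cheeger_value(S), S) for S in subsets)

# Second-smallest eigenvalue of the normalized Laplacian I - D^{-1/2} W D^{-1/2}.
L = np.eye(n) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
vals, vecs = np.linalg.eigh(L)
lam1 = vals[1]

print(f"h(G) = {hG:.4f}, optimal cut S = {best_S}")
print(f"Cheeger inequality: {lam1 / 2:.4f} <= {hG:.4f} <= {np.sqrt(2 * lam1):.4f}")

# Simplest spectral clustering: threshold the eigenvector of lambda_1 at zero.
fiedler = vecs[:, 1] / np.sqrt(d)        # back to the random-walk normalization
S_spec = [i for i in range(n) if fiedler[i] > 0]
print(f"spectral cut = {S_spec}, Cheeger value = {cheeger_value(S_spec):.4f}")
```

For such a weakly connected pair of clusters, the brute-force cut and the spectral cut typically coincide, and the value h(G) lies between the two bounds of the Cheeger inequality.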

The subdifferential of the energy functional \(\vert \cdot \vert _{TV}\) is minus the 1-Laplacian in graphs. Using the nonlinear eigenvalue problem \(\lambda \, \mathrm{sign}(u) \in -\Delta _1 u\), the theory of 1-Spectral Clustering is developed in [13,14,15, 27]. For a generalization of the above results to the framework of random walk spaces see [37, 38].

The only results about the Cheeger cut problem in metric graphs that we know of are those of Del Pezzo and Rossi [19], who study the first nonzero eigenvalue of the p-Laplacian on a quantum graph with Kirchhoff boundary conditions at the vertices and approach the Cheeger cut problem by taking the limit of the eigenfunctions as \(p \rightarrow 1\). As we will see later, their concept of total variation of a function of bounded variation on a metric graph is not clear and, consequently, neither is their concept of perimeter (see Remark 2.11). Here we use a different concept of total variation for functions on metric graphs, proposed in [34], and consequently of perimeter, which takes into account the jumps of the function at the vertices.

Since the work by Nicaise [41], where a Cheeger inequality in metric graphs was obtained, there have been very few results in this direction (see [29, 30, 43]).

On the other hand, the Cheeger paper [16] also motivated the so-called Cheeger problem. Given a bounded domain \(\Omega \subset \mathbb R^N\), the Cheeger constant of \(\Omega \) is defined as

$$\begin{aligned} h_1(\Omega ) := \inf \left\{ \frac{\text{ Per }(E)}{\vert E\vert } \ : \ E \subset \Omega , \ E \ \hbox { with finite perimeter,} \ \vert E \vert > 0 \right\} , \end{aligned}$$

where \(\text{ Per }(E)\) is the perimeter of E and \(\vert E\vert \) its Lebesgue measure. Any set \(E \subset \Omega \) such that

$$\begin{aligned} \frac{\text{ Per }(E )}{\vert E \vert } = h_1(\Omega ), \end{aligned}$$

is called a Cheeger set of \(\Omega \). Furthermore, we say that \(\Omega \) is calibrable if it is a Cheeger set of itself, that is, if

$$\begin{aligned} \frac{\text{ Per }(\Omega )}{\vert \Omega \vert } = h_1(\Omega ). \end{aligned}$$

We shall generically refer to the computation or estimation of \(h_1(\Omega )\), and the characterization of the Cheeger sets of \(\Omega \), as the Cheeger problem. In recent years there has been a lot of literature on the Cheeger problem; see [32, 42] for surveys and [35, 36] for the nonlocal Cheeger problem.

It is well known that the Cheeger constant of \(\Omega \) is the limit of the sequence of first eigenvalues of the p-Laplacian (with Dirichlet conditions) as p tends to 1, see [28]. A similar result was obtained by Del Pezzo and Rossi in [20] in the context of metric graphs, but here again the problem is their concept of perimeter in metric graphs. For the Cheeger problem in random walk spaces, which has weighted graphs as a particular case, see [38].

The aim of this paper is to study the Cheeger cut and the Cheeger problem in metric graphs. We introduce the concepts of Cheeger and calibrable sets in metric graphs, and we also study the eigenvalue problem for the 1-Laplacian, by means of which we give a method to solve the optimal Cheeger cut problem. To do so we work in the framework developed in [34] to study the total variation flow in metric graphs.

The structure of the paper is as follows. In Sect. 2 we recall the notion of metric graphs and the results about functions of bounded variation in metric graphs that we need. In Sect. 3 we study the Cheeger problem: we introduce the concepts of Cheeger and calibrable sets in metric graphs, we give different characterizations of the Cheeger constant of a set and its relation with the Max-Flow Min-Cut Theorem, and we characterize the calibrable sets. Section 3 is also devoted to the eigenvalue problem for the 1-Laplacian in metric graphs and its relation with the Cheeger problem. In Sect. 4 we study the Cheeger cut in metric graphs. We obtain a characterization similar to the one obtained in [46] for weighted graphs, which allows us to prove the existence of an optimal Cheeger cut, and we relate it to the eigenvalue problem for the 1-Laplacian, obtaining results similar to those in [13] for weighted graphs; this yields a method to solve the optimal Cheeger cut problem. Finally, we also obtain a Cheeger inequality in metric graphs.

2 Preliminaries

In this section, after giving the basic concepts of metric graphs, we recall the results about functions of bounded variation and their total variation introduced in [34], which provide the framework in which we develop our work.

2.1 Metric graphs

We recall here some basic knowledge about metric graphs, see for instance [7] and the references therein.

A graph \(\Gamma \) consists of a finite or countably infinite set of vertices \(\mathrm {V}(\Gamma )=\{\mathrm {v}_i\}\) and a set of edges \(\mathrm {E}(\Gamma )=\{\textbf{e}_j\}\) connecting the vertices. A graph \(\Gamma \) is said to be a finite graph if the number of edges and the number of vertices are finite. An edge and a vertex on that edge are called incident. We will denote \(\mathrm {v}\in \textbf{e}\) when the edge \(\textbf{e}\) and the vertex \(\mathrm {v}\) are incident. We define \(\mathrm {E}_{\mathrm {v}}(\Gamma )\) as the set of all edges incident to \(\mathrm {v}\), and the degree of \(\mathrm {v}\) as \(d_\mathrm {v}:= \sharp \mathrm {E}_{\mathrm {v}}(\Gamma )\). We define the boundary of \(V(\Gamma )\) as

$$\begin{aligned} \partial V(\Gamma ):= \{ \mathrm {v}\in V(\Gamma ) \ : \ d_\mathrm {v}=1 \}, \end{aligned}$$

and its interior as

$$\begin{aligned} \mathrm{int}( V(\Gamma )) := \{ \mathrm {v}\in V(\Gamma ) \ : \ d_\mathrm {v}> 1 \}. \end{aligned}$$

We will assume the absence of loops, since if these are present, one can break them into pieces by introducing new intermediate vertices. We also assume the absence of multiple parallel edges.

A walk is a sequence of edges \(\{\textbf{e}_1,\textbf{e}_2,\textbf{e}_3,\dots \}\) in which, for each i (except the last), the end of \(\textbf{e}_i\) is the beginning of \(\textbf{e}_{i+1}\). A trail is a walk in which no edge is repeated. A path is a trail in which no vertex is repeated.

From now on we will deal with a connected, compact metric graph \(\Gamma \):

  • A graph \(\Gamma \) is a metric graph if

    1. (1)

      each edge \(\textbf{e}\) is assigned with a positive length \(\ell _{\textbf{e}}\in (0,+\infty ];\)

    2. (2)

      for each edge \(\textbf{e}\), a coordinate is assigned to each point of it, including its vertices. For that purpose, each edge \(\textbf{e}\) is identified with an ordered pair \((\mathrm {i}_{\textbf{e}},\mathrm {f}_{\textbf{e}})\) of vertices, where \(\mathrm {i}_{\textbf{e}}\) and \(\mathrm {f}_{\textbf{e}}\) are the initial and terminal vertex of \(\textbf{e}\), respectively; this orientation carries no intrinsic meaning when travelling along the edge, but it allows us to define coordinates by means of an increasing function

      $$\begin{aligned} \begin{array}{rlcc} c_\textbf{e}:&{}\textbf{e}&{}\rightarrow &{} [0,\ell _\textbf{e}]\\ &{}x&{}\rightsquigarrow &{} x_{\textbf{e}} \end{array} \end{aligned}$$

      such that, setting \(c_\textbf{e}(\mathrm {i}_\textbf{e}):=0\) and \(c_\textbf{e}(\mathrm {f}_\textbf{e}):=\ell _{\textbf{e}}\), it is surjective; \(x_{\textbf{e}}\) is called the coordinate of the point \(x\in \textbf{e}\).

  • A graph is said to be connected if a path exists between every pair of vertices, that is, if it is connected in the usual topological sense.

  • A compact metric graph is a finite metric graph whose edges all have finite length.

If a sequence of edges \(\{\textbf{e}_j\}_{j=1}^n\) forms a path, its length is defined as \(\sum _{j=1}^n\ell _{\textbf{e}_j}.\) The length of a metric graph, denoted \(\ell (\Gamma )\), is the sum of the lengths of all its edges. Sometimes we identify \(\Gamma \) with

$$\begin{aligned} \Gamma \equiv \bigcup _{\textbf{e}\in E(\Gamma )} \textbf{e}. \end{aligned}$$

Given a set \(A \subset \Gamma \), we define its length as

$$\begin{aligned} \ell (A):= \sum _{ \textbf{e}\in E(\Gamma ), A \cap \textbf{e}\not =\emptyset } \vert c_{\textbf{e}}(A \cap \textbf{e}) \vert , \end{aligned}$$

where \(\vert \cdot \vert \) denotes the one-dimensional Lebesgue measure.

For two vertices \(\mathrm {v}\) and \(\hat{\mathrm {v}},\) the distance between \(\mathrm {v}\) and \(\hat{\mathrm {v}}\), \(d_\Gamma (\mathrm {v},\hat{\mathrm {v}})\), is defined as the minimal length of the paths connecting them. Let us be more precise and consider x, y two points in the graph \(\Gamma \).

-if \(x,y\in \textbf{e}\) (they belong to the same edge, note that they can be vertices), we define the distance-in-the-path-\(\textbf{e}\) between x and y as

$$\begin{aligned} \hbox {dist}_{\textbf{e}}(x,y):=|y_\textbf{e}-x_\textbf{e}|; \end{aligned}$$

-if \(x\in \textbf{e}_a\), \(y\in \textbf{e}_b\), with \(\textbf{e}_a\) and \(\textbf{e}_b\) different edges, let \(P=\{\textbf{e}_a,\textbf{e}_1,\dots ,\textbf{e}_{n},\textbf{e}_b\}\) be a path (\(n\ge 0\)) connecting them. Let us call \(\textbf{e}_{0} = \textbf{e}_a\) and \(\textbf{e}_{n+1}= \textbf{e}_b\). Following the definition of a path given above, let \(\mathrm {v}_{0}\) be the vertex that is the end of \(\textbf{e}_0\) and the beginning of \(\textbf{e}_{1}\) (note that these vertices need not be the terminal and initial vertices of the edges under consideration), and \(\mathrm {v}_{n}\) the vertex that is the end of \(\textbf{e}_n\) and the beginning of \(\textbf{e}_{n+1}\). We say that the distance-in-the-path-P between x and y is equal to

$$\begin{aligned} \hbox {dist}_{\textbf{e}_0}(x,\mathrm {v}_0)+ \sum _{1\le j\le n}\ell _{\textbf{e}_j}+ \hbox {dist}_{\textbf{e}_{n+1}}(\mathrm {v}_n,y). \end{aligned}$$

We define the distance between x and y, that we will denote by \(d_\Gamma (x,y)\), as the infimum of all the distances-in-paths between x and y, that is,

$$\begin{aligned} d_\Gamma (x,y) = \inf \Big \{ \hbox {dist}_{\textbf{e}_0}(x,\mathrm {v}_0)+ \sum _{1\le j\le n}\ell _{\textbf{e}_j}+ \hbox {dist}_{\textbf{e}_{n+1}}(\mathrm {v}_n,y) \ : \ \{\textbf{e}_0,\textbf{e}_1,\dots ,\textbf{e}_{n},\textbf{e}_{n+1}\} \ \hbox {path connecting } x \hbox { and } y \Big \}. \end{aligned}$$

We remark that the distance between two points x and y belonging to the same edge \(\textbf{e}\) can be strictly smaller than \(|y_\textbf{e}-x_\textbf{e}|\). This happens when there is a path connecting them (using more edges than \(\textbf{e}\)) with length smaller than \(|y_\textbf{e}-x_\textbf{e}|\).
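As an illustration of this definition, the following sketch (with an illustrative triangle-shaped metric graph, not taken from the paper) computes \(d_\Gamma (x,y)\) by combining in-edge distances with vertex-to-vertex shortest paths; in particular it exhibits two points of the same edge whose distance is realised by a path leaving that edge.

```python
import heapq

# Illustrative triangle-shaped metric graph: edges given as
# (initial vertex, final vertex, length); a point is a pair (edge index, coordinate).
edges = [("v1", "v2", 2.0), ("v2", "v3", 0.5), ("v3", "v1", 0.5)]

def vertex_distances(source):
    """Dijkstra on the combinatorial graph, using the edge lengths as weights."""
    adj = {}
    for a, b, l in edges:
        adj.setdefault(a, []).append((b, l))
        adj.setdefault(b, []).append((a, l))
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        dv, v = heapq.heappop(heap)
        if dv > dist.get(v, float("inf")):
            continue
        for w, l in adj.get(v, []):
            if dv + l < dist.get(w, float("inf")):
                dist[w] = dv + l
                heapq.heappush(heap, (dv + l, w))
    return dist

def graph_distance(p, q):
    """d_Gamma(p, q): either stay inside a common edge, or exit through an endpoint,
    follow a shortest vertex path, and enter the other edge through an endpoint."""
    (ep, xp), (eq, xq) = p, q
    best = abs(xq - xp) if ep == eq else float("inf")
    ip, fp, lp = edges[ep]
    iq, fq, lq = edges[eq]
    for vp, off_p in [(ip, xp), (fp, lp - xp)]:
        dv = vertex_distances(vp)
        for vq, off_q in [(iq, xq), (fq, lq - xq)]:
            best = min(best, off_p + dv.get(vq, float("inf")) + off_q)
    return best

print(graph_distance((0, 0.9), (0, 1.1)))  # ~0.2, realised inside the edge e1
print(graph_distance((0, 0.1), (0, 1.9)))  # ~1.2 < 1.8: the shortest path leaves e1
```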

A function u on a metric graph \(\Gamma \) is a collection of functions \([u]_{\textbf{e}}\) defined on \((0,\ell _{\textbf{e}})\) for all \(\textbf{e}\in \mathrm {E}(\Gamma ),\) not just at the vertices as in discrete models.

Throughout this work, \( \int _{\Gamma } u(x) dx\) or \( \int _{\Gamma } u\) denotes \( \sum _{\textbf{e}\in \mathrm {E}(\Gamma )} \int _{0}^{\ell _{\textbf{e}}} [u]_{\textbf{e}}(x_\textbf{e})\, dx_\textbf{e}\). Note that given \(\Omega \subset \Gamma \), we have

$$\begin{aligned} \ell (\Omega ) = \int _\Gamma {\chi }_\Omega dx. \end{aligned}$$

Let \(1\le p\le +\infty .\) We say that u belongs to \(L^p(\Gamma )\) if \([u]_{\textbf{e}}\) belongs to \(L^p(0,\ell _{\textbf{e}})\) for all \(\textbf{e}\in \mathrm {E}(\Gamma )\) and

$$\begin{aligned} \Vert u\Vert _{L^{p} (\Gamma )}^p:=\sum _{\textbf{e}\in \mathrm {E}(\Gamma )} \Vert [u]_{\textbf{e}}\Vert _{L^{p}(0,\ell _{\textbf{e}})}^p<+\infty . \end{aligned}$$

The Sobolev space \(W^{1,p}(\Gamma )\) is defined as the space of functions u on \(\Gamma \) such that \([u]_{\textbf{e}}\in W^{1,p}(0,\ell _{\textbf{e}})\) for all \(\textbf{e}\in \mathrm {E}(\Gamma )\) and

$$\begin{aligned} \Vert u\Vert _{W^{1,p}(\Gamma )}^p:=\sum _{\textbf{e}\in \mathrm {E}(\Gamma )} \Vert [u]_{\textbf{e}}\Vert _{L ^p(0,\ell _{\textbf{e}})}^p+\Vert [u]_{\textbf{e}}^\prime \Vert _{L ^p(0,\ell _{\textbf{e}})}^p<+\infty . \end{aligned}$$

The space \(W^{1,p}(\Gamma )\) is a Banach space for \(1 \le p \le \infty \). It is reflexive for \(1< p < \infty \) and separable for \(1 \le p < \infty .\) Observe that in the definition of \(W^{1,p}(\Gamma )\) we do not assume the continuity at the vertices. Let us point out that the above spaces are unaffected by the orientation of the edges.

A quantum graph is a metric graph \(\Gamma \) equipped with a differential operator acting on the edges together with vertex conditions. In this work, we will consider the \(1-\)Laplacian differential operator given formally by

$$\begin{aligned} \Delta _1 u(x):= \left( \frac{ u^{\prime }(x)}{|u^{\prime }(x)|}\right) ^{\prime }, \end{aligned}$$

on each edge.

From now on we will assume that \(\Gamma \) is a finite, compact and connected metric graph.

2.2 Convex functions and subdifferentials

Let H be a real Hilbert space with scalar product \(\langle \cdot , \cdot \rangle _H\) and norm \(\Vert u \Vert _H = \sqrt{\langle u, u \rangle _H}\). Given a function \(\mathcal {F} : H \rightarrow ]-\infty , \infty ]\), we call the set \(D(\mathcal {F}) : = \{ u \in H \ : \ \mathcal {F}(u) < + \infty \}\) the effective domain of \(\mathcal {F}\), and \(\mathcal {F}\) is said to be proper if \(D(\mathcal {F})\) is non-empty. Further, we say that \(\mathcal {F}\) is lower semi-continuous if for every \(c \in \mathbb R\), the sublevel set

$$\begin{aligned} E_c : = \{ u \in D(\mathcal {F}) \ : \ \mathcal {F}(u) \le c \} \end{aligned}$$

is closed in H.

Given a convex proper function \(\mathcal {F} : H \rightarrow ]-\infty , \infty ]\), its subdifferential is defined by

$$\begin{aligned} \partial _H \mathcal {F} := \left\{ (u,h) \in H \times H \ : \ \mathcal {F}(u+v) - \mathcal {F}(u) \ge \langle h, v \rangle _H \ \ \forall \, v \in D(\mathcal {F}) \right\} . \end{aligned}$$

2.3 BV functions and integration by parts

We need to recall the concept of functions of bounded variation and of their total variation in metric graphs that we introduced in [34], since this is the framework in which we study the Cheeger problem.

For bounded variation functions of one variable we follow [3]. Let \(I \subset \mathbb R\) be an interval, we say that a function \(u \in L^1(I)\) is of bounded variation if its distributional derivative Du is a Radon measure on I with bounded total variation \(\vert Du \vert (I) < + \infty \). We denote by BV(I) the space of all functions of bounded variation in I. It is well known (see [3]) that given \(u \in BV(I)\) there exists \(\overline{u}\) in the equivalence class of u, called a good representative of u, with the following properties. If \(J_u\) is the set of atoms of Du, i.e., \(x \in J_u\) if and only if \(Du(\{ x \}) \not = 0\), then \(\overline{u}\) is continuous in \(I \setminus J_u\) and has a jump discontinuity at any point of \(J_u\):

$$\begin{aligned} \overline{u}(x_{-}) := \lim _{y \uparrow x}\overline{u}(y) = Du(]a,x[), \ \ \ \ \ \overline{u}(x_{+}) := \lim _{y \downarrow x}\overline{u}(y) = Du(]a,x]) \ \ \ \forall \, x \in J_u, \end{aligned}$$

where by simplicity we are assuming that \(I = ]a,b[\). Consequently,

$$\begin{aligned} \overline{u}(x_{+}) - \overline{u}(x_{-}) = Du(\{ x \}) \ \ \ \forall \, x \in J_u. \end{aligned}$$

Moreover, \(\overline{u}\) is differentiable at \({{\mathcal {L}}}^1\) a.e. point of I, and the derivative \(\overline{u}'\) is the density of Du with respect to \({{\mathcal {L}}}^1\). For \(u \in BV(I)\), the measure Du decomposes into its absolutely continuous and singular parts \(Du = D^a u + D^s u\). Then \(D^a u = \overline{u}' \ {{\mathcal {L}}}^1\). We also split \(D^su\) in two parts: the jump part \(D^j u\) and the Cantor part \(D^c u\). \(J_u\) denotes the set of atoms of Du.

It is well known (see for instance [3]) that

$$\begin{aligned} D^j u = \sum _{x \in J_u} \left( \overline{u}(x_{+}) - \overline{u}(x_{-}) \right) \delta _x, \end{aligned}$$

and also,

$$\begin{aligned} \vert Du \vert (I)= & {} \vert D^au \vert (I) + \vert D^j u \vert (I) + \vert D^c u \vert (I)\\= & {} \int _a^b \vert \overline{u}'(x) \vert \, dx + \sum _{x \in J_u} \vert \overline{u}(x_{+}) - \overline{u}(x_{-}) \vert + \vert D^c u \vert (I). \end{aligned}$$

Obviously, if \(u \in BV(I)\) then \(u \in W^{1,1}(I)\) if and only if \(D^su \equiv 0\), and in this case we have \(Du = \overline{u}' \ {{\mathcal {L}}}^1\).

A measurable subset \(E \subset I\) is a set of finite perimeter in I if \({\chi }_E \in BV(I)\), and its perimeter is defined as

$$\begin{aligned} \mathrm{Per}(E, I):= \vert D {\chi }_E \vert (I). \end{aligned}$$

The structure of sets of finite perimeter is very simple in dimension 1, as the following proposition [3, Proposition 3.52] shows.

Proposition 2.1

If E has finite perimeter in ]a,b[ and \(\vert E \cap ]a,b[ \vert > 0\), there exist an integer \(p \ge 1\) and p pairwise disjoint intervals \(J_i = [a_{2i-1}, a_{2i}] \subset \mathbb R\) such that \(E \cap ]a,b[\) is equivalent to the union of the intervals \(J_i\) and

$$\begin{aligned} \mathrm{Per}(E, ]a,b[)= \sharp (\{ i \in \{1,2, \ldots , 2p \} \ : \ a_i \in ]a,b[ \}). \end{aligned}$$
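In particular, for a finite union of pairwise disjoint intervals the perimeter is just the number of interval endpoints that fall strictly inside ]a,b[. A minimal sketch of this count (illustrative code, not part of the original text):

```python
def perimeter(intervals, a, b):
    """Per(E, ]a,b[) for E = union of pairwise disjoint closed intervals contained in
    [a, b]: the number of interval endpoints lying strictly inside ]a,b[ (Proposition 2.1)."""
    endpoints = [x for (lo, hi) in intervals for x in (lo, hi)]
    return sum(1 for x in endpoints if a < x < b)

# E = [0,1] u [2,3] inside ]0,5[: the endpoints 1, 2, 3 are interior, 0 is not, so Per = 3.
print(perimeter([(0.0, 1.0), (2.0, 3.0)], 0.0, 5.0))  # 3
```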

From now on, when we deal with point-wise valued BV-functions we shall always use the good representative.

Given \(\textbf{z}\in W^{1,2}(]a,b[)\) and \(u \in BV(]a,b[)\), by \(\textbf{z}Du\) we mean the Radon measure in ]a,b[ defined as

$$\begin{aligned} \langle \varphi , \textbf{z}Du \rangle := \int _a^b \varphi \textbf{z}\, Du \ \ \ \ \ \ \forall \, \varphi \in C_c(]a,b[). \end{aligned}$$

Note that if \(\varphi \in \mathcal {D}(]a,b[)\), then

$$\begin{aligned} \langle \varphi , \textbf{z}Du \rangle = - \int _a^b u \textbf{z}^{\prime } \varphi dx - \int _a^b u \textbf{z}\varphi ^{\prime } dx, \end{aligned}$$

which is the definition given by Anzellotti in [5].

Working as in [5, Corollary 1.6], it is easy to see that

$$\begin{aligned} \vert \textbf{z}Du \vert (B) \le \Vert \textbf{z}\Vert _{L^{\infty }(]a,b[)} \vert Du \vert (B) \quad \hbox {for all Borel sets} \ B \subset ]a,b[. \end{aligned}$$
(2.1)

Then, \(\textbf{z}Du\) is absolutely continuous with respect to the measure \(\vert Du \vert \).

The following result was given in [34, Proposition 2.1].

Proposition 2.2

Let \(\textbf{z}_n \in W^{1,2}(]a,b[)\). If

$$\begin{aligned} \lim _{n \rightarrow \infty }\textbf{z}_n = \textbf{z}\quad \hbox {weakly}^* \text {in} \ L^\infty (]a,b[), \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty }\textbf{z}^{\prime }_n = \textbf{z}^{\prime } \quad \hbox {weakly in} \ L^1 (]a,b[), \end{aligned}$$

then for every \(u \in BV(]a,b[)\), we have

$$\begin{aligned} \textbf{z}_n Du \rightarrow \textbf{z}Du \quad \hbox {as measures}, \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _a^b \textbf{z}_n Du = \int _a^b\textbf{z}Du. \end{aligned}$$

We need the following integration by parts formula, which can be proved using a suitable regularization of \(u \in BV(I)\) as in the proof of [5, Theorem 1.9] (see also Theorem C.9 of [4]).

Lemma 2.3

If \(\textbf{z}\in W^{1,2}(]a,b[)\) and \(u \in BV(]a,b[)\), then

$$\begin{aligned} \int _a^b \textbf{z}Du + \int _a^b u(x) \textbf{z}^{\prime }(x) \, dx = \textbf{z}(b) u(b_{-})- \textbf{z}(a) u(a_{+}). \end{aligned}$$
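As a quick sanity check of Lemma 2.3 in the smooth case, where \(Du = u'(x)\,dx\) and the one-sided traces \(u(b_{-})\), \(u(a_{+})\) reduce to u(b), u(a), one can verify the formula symbolically for illustrative choices of u and \(\textbf{z}\):

```python
import sympy as sp

# Symbolic sanity check of Lemma 2.3 in the smooth case; the functions are illustrative.
x = sp.symbols('x')
a, b = 0, 1
u = x**2 + 1          # smooth u, so Du = u'(x) dx and u(b_-) = u(b), u(a_+) = u(a)
z = x**3 - x          # z in W^{1,2}(]a,b[)

lhs = sp.integrate(z * sp.diff(u, x), (x, a, b)) + sp.integrate(u * sp.diff(z, x), (x, a, b))
rhs = z.subs(x, b) * u.subs(x, b) - z.subs(x, a) * u.subs(x, a)
assert sp.simplify(lhs - rhs) == 0
```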

Definition 2.4

We define the set of bounded variation functions in \(\Gamma \) as

$$\begin{aligned} BV(\Gamma ):= \{ u \in L^1(\Gamma ) \ : \ [u]_{\textbf{e}}\in BV(0,\ell _{\textbf{e}}) \ \hbox {for all} \ \textbf{e}\in \mathrm {E}(\Gamma ) \}. \end{aligned}$$

Given \(u \in BV(\Gamma )\), for \(\textbf{e}\in E_\mathrm {v}\), we define

$$\begin{aligned} {[}u]_\textbf{e}(\mathrm {v}) := \left\{ \begin{array}{ll} [u]_\textbf{e}(0+), \quad &{}\hbox {if} \ \ \mathrm {v}= \mathrm {i}_{\textbf{e}} \\ {[}u]_\textbf{e}(\ell _\textbf{e}-), \quad &{}\hbox {if} \ \ \mathrm {v}= \mathrm {f}_{\textbf{e}}. \end{array} \right. \end{aligned}$$

For \(u \in BV(\Gamma )\), we define

$$\begin{aligned} \vert D u \vert (\Gamma ):= \sum _{\textbf{e}\in \mathrm {E}(\Gamma )} \vert D [u]_{\textbf{e}} \vert (0,\ell _{\textbf{e}}). \end{aligned}$$

We also write

$$\begin{aligned} \vert D u \vert (\Gamma ) =\int _{\Gamma } |Du|. \end{aligned}$$

Obviously, for \(u \in BV(\Gamma )\), we have

$$\begin{aligned} \vert D u \vert (\Gamma )= 0 \ \iff \ [u]_\textbf{e}\ \hbox {is constant in} \ (0, \ell _\textbf{e}), \ \ \forall \, \textbf{e}\in E(\Gamma ). \end{aligned}$$

\(BV(\Gamma )\) is a Banach space with respect to the norm

$$\begin{aligned} \Vert u\Vert _{BV(\Gamma )}:=\Vert u \Vert _{L^1(\Gamma )} + \vert D u \vert (\Gamma ). \end{aligned}$$

Remark 2.5

Note that we do not include a continuity condition at the vertices in the definition of the space \(BV(\Gamma )\). This is due to the fact that, if we included continuity at the vertices, then typical functions of bounded variation, such as functions of the form \({\chi }_D\) with \(D \subset \Gamma \) containing a vertex \(\mathrm {v}\) common to two edges, would not be elements of \(BV(\Gamma )\). \(\blacksquare \)

By the Embedding Theorem for BV-functions (cf. [3, Corollary 3.49, Remark 3.30]), we have the following result.

Theorem 2.6

The embedding \(BV(\Gamma ) \hookrightarrow L^p(\Gamma )\) is continuous for \(1\le p \le \infty \) and compact for \(1 \le p < \infty \). Moreover, we also have the following Poincaré inequality:

$$\begin{aligned} \Vert u - \overline{u} \Vert _p \le C \vert D u \vert (\Gamma ) \quad \forall \, u \in BV(\Gamma ), \quad 1 \le p \le \infty , \end{aligned}$$

where

$$\begin{aligned} \overline{u}:= \frac{1}{\ell (\Gamma )} \int _\Gamma u(x) dx. \end{aligned}$$

Let us point out that in metric graphs \(\vert D u \vert (\Gamma )\) is not the right definition of the total variation of u, since it does not measure the jumps of the function at the vertices. In [34], in order to give a definition of the total variation of a function \( u \in BV(\Gamma )\) that takes into account the jumps of the function at the vertices, we gave a Green's formula like the one obtained by Anzellotti in [5] for BV-functions in Euclidean spaces. To do that we start by defining the pairing \(\textbf{z}Du\) between an element \(\textbf{z}\in W^{1,2}(\Gamma )\) and a BV function u. This is a metric graph analogue of the classical Anzellotti pairing introduced in [5].

Definition 2.7

For \(\textbf{z}\in W^{1,2}(\Gamma )\) and \(u \in BV(\Gamma )\), we define \(\textbf{z}Du:= ( [\textbf{z}]_\textbf{e}\, D[u]_\textbf{e})_{\textbf{e}\in E(\Gamma )} \), that is, for \(\varphi \in C_c(\Gamma )\),

$$\begin{aligned} \langle \textbf{z}Du, \varphi \rangle = \sum _{\textbf{e}\in \mathrm {E}(\Gamma )} \int _0^{\ell _{\textbf{e}}} \varphi _\textbf{e}[\textbf{z}]_\textbf{e}\, D[u]_\textbf{e}. \end{aligned}$$

We have that \(\textbf{z}Du\) is a Radon measure in \(\Gamma \) and

$$\begin{aligned} \int _\Gamma \textbf{z}Du := \sum _{\textbf{e}\in \mathrm {E}(\Gamma )} \int _0^{\ell _{\textbf{e}}} [\textbf{z}]_\textbf{e}\, D[u]_\textbf{e}. \end{aligned}$$

By (2.1) applied edgewise, we have

$$\begin{aligned} \left| \int _{\Gamma } \textbf{z}Du \right| \le \Vert \textbf{z}\Vert _{L^{\infty }(\Gamma )} \vert D u \vert (\Gamma ). \end{aligned}$$

Then, \(\textbf{z}Du\) is absolutely continuous with respect to the measure \(\vert Du \vert \).

Given \(\textbf{z}\in W^{1,2}(\Gamma )\), for \( \textbf{e}\in E_\mathrm {v}\), we define

$$\begin{aligned}{}[\textbf{z}]_\textbf{e}(\mathrm {v}):= \left\{ \begin{array}{ll}[\textbf{z}]_\textbf{e}(\ell _{\textbf{e}}), \quad &{}\hbox {if} \ \ \mathrm {v}= \mathrm {f}_\textbf{e}, \\ -[\textbf{z}]_\textbf{e}(0),\quad &{}\hbox {if} \ \ \mathrm {v}= \mathrm {i}_\textbf{e}. \end{array} \right. \end{aligned}$$

By Lemma 2.3, we have

$$\begin{aligned} \int _{\Gamma } \textbf{z}Du:= & {} \sum _{\textbf{e}\in \mathrm {E}(\Gamma )} \int _0^{\ell _{\textbf{e}}}[\textbf{z}]_\textbf{e}\, D[u]_\textbf{e}\\= & {} - \sum _{\textbf{e}\in \mathrm {E}(\Gamma )} \int _0^{\ell _{\textbf{e}}}[u]_\textbf{e}(x) ([\textbf{z}]_\textbf{e})^{\prime }(x) dx+ \sum _{\textbf{e}\in \mathrm {E}(\Gamma )} ( [\textbf{z}]_\textbf{e}(\ell _{\textbf{e}}) [u]_\textbf{e}((\ell _{\textbf{e}})_{-}) - [\textbf{z}]_\textbf{e}(0) [u]_\textbf{e}(0_+) )\\= & {} - \int _\Gamma u\textbf{z}^{\prime } + \sum _{\mathrm {v}\in V(\Gamma )} \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [u]_\textbf{e}(\mathrm {v}). \end{aligned}$$

Then, if we define

$$\begin{aligned} \int _{\partial \Gamma } \textbf{z}u:=\sum _{\mathrm {v}\in V(\Gamma )} \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [u]_\textbf{e}(\mathrm {v}), \end{aligned}$$

for \(\textbf{z}\in W^{1,2}(\Gamma )\) and \(u \in BV(\Gamma )\), we have the following Green’s formula:

$$\begin{aligned} \int _{\Gamma } \textbf{z}Du + \int _\Gamma u\textbf{z}^{\prime } = \int _{\partial \Gamma } \textbf{z}u. \end{aligned}$$
(2.2)
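The following sketch checks the sign conventions in Green's formula (2.2) symbolically on a path graph with two unit-length edges \(\textbf{e}_1 = (\mathrm {v}_1, \mathrm {v}_2)\) and \(\textbf{e}_2 = (\mathrm {v}_2, \mathrm {v}_3)\), using smooth (hence BV) edge functions; the data are illustrative and chosen only for this check.

```python
import sympy as sp

# Symbolic check of Green's formula (2.2) on a path graph with two unit-length edges
# e1 = (v1, v2) and e2 = (v2, v3), with illustrative smooth edge functions.
# Sign convention: [z]_e(v) = z_e(l_e) if v = f_e and [z]_e(v) = -z_e(0) if v = i_e.
x = sp.symbols('x')
u = {"e1": x**2, "e2": 1 - x}
z = {"e1": x + 1, "e2": x**2 - 2}

lhs = sum(sp.integrate(z[e] * sp.diff(u[e], x), (x, 0, 1)) +
          sp.integrate(u[e] * sp.diff(z[e], x), (x, 0, 1)) for e in u)

def z_at(e, v_is_final):      # [z]_e(v), with the sign convention above
    return z[e].subs(x, 1) if v_is_final else -z[e].subs(x, 0)

def u_at(e, v_is_final):      # [u]_e(v): one-sided trace of u on the edge e at the vertex
    return u[e].subs(x, 1) if v_is_final else u[e].subs(x, 0)

# v1 is initial for e1; v2 is final for e1 and initial for e2; v3 is final for e2.
rhs = (z_at("e1", False) * u_at("e1", False) + z_at("e1", True) * u_at("e1", True)
       + z_at("e2", False) * u_at("e2", False) + z_at("e2", True) * u_at("e2", True))
assert sp.simplify(lhs - rhs) == 0
```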

We define

$$\begin{aligned} X_0(\Gamma ):= \{ \textbf{z}\in W^{1,2}(\Gamma ) \ : \ \textbf{z}(\mathrm {v}) =0, \ \ \forall \mathrm {v}\in V(\Gamma )\}. \end{aligned}$$

For \(u \in BV(\Gamma )\) and \(\textbf{z}\in X_0(\Gamma )\), we have the following Green’s formula

$$\begin{aligned} \int _{\Gamma } \textbf{z}Du + \int _\Gamma u\textbf{z}^{\prime } = 0. \end{aligned}$$
(2.3)

We consider now the elements of \(W^{1,2}(\Gamma )\) that satisfy a Kirchhoff condition, that is, the set

$$\begin{aligned} X_K(\Gamma ):= \left\{ \textbf{z}\in W^{1,2}(\Gamma ) \ : \ \sum _{\textbf{e}\in E_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) =0, \ \ \forall \mathrm {v}\in V(\Gamma ) \right\} . \end{aligned}$$

Note that if \(\textbf{z}\in X_K(\Gamma )\), then \(\textbf{z}(\mathrm {v}) =0\) for all \(\mathrm {v}\in \partial V(\Gamma )\). Therefore, for \(u \in BV(\Gamma )\) and \(\textbf{z}\in X_K(\Gamma )\), we have the following Green’s formula

$$\begin{aligned} \int _{\Gamma } \textbf{z}Du + \int _\Gamma u\textbf{z}^{\prime } = \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [u]_\textbf{e}(\mathrm {v}). \end{aligned}$$
(2.4)

Now, for \(\mathrm {v}\in \mathrm{int}( V(\Gamma ))\), since \(\textbf{z}\in X_K(\Gamma )\) satisfies the Kirchhoff condition at \(\mathrm {v}\), we have

$$\begin{aligned} \sum _{\textbf{e}\in E_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [u]_{\hat{\textbf{e}}}(\mathrm {v}) =0, \quad \hbox {for all} \ \hat{\textbf{e}} \in E_\mathrm {v}(\Gamma ). \end{aligned}$$

Hence

$$\begin{aligned} \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [u]_\textbf{e}(\mathrm {v}) = \frac{1}{d_\mathrm {v}} \sum _{\hat{\textbf{e}}\in \mathrm {E}_\mathrm {v}(\Gamma )} \sum _{\textbf{e}\in E_\mathrm {v}(\Gamma )}[\textbf{z}]_\textbf{e}(\mathrm {v}) \left( [u]_\textbf{e}(\mathrm {v}) - [u]_{\hat{\textbf{e}}}(\mathrm {v}) \right) . \end{aligned}$$

Therefore, we can rewrite Green’s formula (2.4) as

$$\begin{aligned} \int _{\Gamma } \textbf{z}Du + \int _\Gamma u\textbf{z}^{\prime } = \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \frac{1}{d_\mathrm {v}} \sum _{\hat{\textbf{e}}\in \mathrm {E}_\mathrm {v}(\Gamma )} \sum _{\textbf{e}\in E_\mathrm {v}(\Gamma )}[\textbf{z}]_\textbf{e}(\mathrm {v}) \left( [u]_\textbf{e}(\mathrm {v}) - [u]_{\hat{\textbf{e}}}(\mathrm {v}) \right) . \end{aligned}$$

Remark 2.8

Given a function u in the metric graph \(\Gamma \), we say that u is continuous at the vertex \(\mathrm {v}\), if

$$\begin{aligned}{}[u]_{\textbf{e}_1}(\mathrm {v}) = [u]_{\textbf{e}_2}(\mathrm {v}), \quad \hbox {for all} \ \textbf{e}_1, \textbf{e}_2 \in E_\mathrm {v}(\Gamma ). \end{aligned}$$

We denote this common value by \(u(\mathrm {v})\). We denote by \(C(\mathrm{int}(V(\Gamma )))\) the set of all functions in \(\Gamma \) that are continuous at the vertices \(\mathrm {v}\in \mathrm{int}(V(\Gamma ))\).

Note that if \(u \in BV(\Gamma ) \cap C(\mathrm{int}(V(\Gamma )))\) and \(\textbf{z}\in X_K(\Gamma )\), then by (2.4), we have

$$\begin{aligned} \int _{\Gamma } \textbf{z}Du + \int _\Gamma u\textbf{z}^{\prime } =0. \end{aligned}$$

\(\blacksquare \)

Definition 2.9

For \(u \in BV(\Gamma )\), we define its total variation as

$$\begin{aligned} TV_\Gamma (u) = \sup \left\{ \displaystyle \left| \int _{\Gamma } u(x) \textbf{z}^{\prime }(x) dx \right| \ : \ \textbf{z}\in X_K(\Gamma ), \ \Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1 \right\} . \end{aligned}$$

We say that a measurable set \(E \subset \Gamma \) is a set of finite perimeter if \({\chi }_E \in BV(\Gamma )\), and we define its \(\Gamma \)-perimeter as

$$\begin{aligned} \mathrm{Per}_\Gamma (E):= TV_\Gamma ({\chi }_E), \end{aligned}$$

that is

$$\begin{aligned} \mathrm{Per}_\Gamma (E) = \sup \left\{ \displaystyle \left| \int _{E} \textbf{z}^{\prime }(x) dx \right| \ : \ \textbf{z}\in X_K(\Gamma ), \ \Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1 \right\} . \end{aligned}$$
(2.5)

Remark 2.10

We have

$$\begin{aligned} \mathrm{Per}_\Gamma (E) = \mathrm{Per}_\Gamma (\Gamma \setminus E), \quad \hbox {for all } E \subset \Gamma \ \hbox { of finite perimeter}. \end{aligned}$$
(2.6)

In fact, given \(\textbf{z}\in X_K(\Gamma )\) with \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\), by Green’s formula (2.4), it is easy to see that

$$\begin{aligned} \int _\Gamma {\chi }_E\textbf{z}^{\prime }= & {} - \int _{\Gamma } \textbf{z}D {\chi }_E + \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [{\chi }_E]_\textbf{e}(\mathrm {v})\\= & {} - \int _{\Gamma } \textbf{z}D {\chi }_{\Gamma \setminus E} + \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [{\chi }_{\Gamma \setminus E}]_\textbf{e}(\mathrm {v}) = \int _\Gamma {\chi }_{\Gamma \setminus E}\textbf{z}^{\prime }. \end{aligned}$$

Thus, by (2.5), we have that (2.6) holds. \(\blacksquare \)

Remark 2.11

In the works by Del Pezzo and Rossi [19, 20] it is not clear what their concept of functions of bounded variation on \(\Gamma \) and of their total variation is. They refer to the monograph [3] for the precise definition; however, in [3] only functions of bounded variation in Euclidean space are studied. Reading their works, it seems that their space of bounded variation functions in \(\Gamma \) coincides with our space \(BV(\Gamma )\), but they do not make clear whether they assume continuity at the vertices. Their total variation of \(u \in BV(\Gamma )\) is \(\vert Du \vert (\Gamma )\), which does not take into account the jumps at the vertices. \(\blacksquare \)

As a consequence of the above definition, we have the following result.

Proposition 2.12

\(TV_\Gamma \) is lower semi-continuous with respect to the weak convergence in \(L^1(\Gamma )\).

As in the local case, we have obtained in [34] the following coarea formula relating the total variation of a function with the perimeter of its superlevel sets.

Theorem 2.13

(Coarea formula) For any \(u \in L^1(\Gamma )\), let \(E_t(u):= \{ x \in \Gamma \ : \ u(x) > t \}\). Then,

$$\begin{aligned} TV_\Gamma (u) = \int _{-\infty }^{+\infty } \mathrm{Per}_\Gamma (E_t(u))\, dt. \end{aligned}$$
(2.7)

We introduce now

$$\begin{aligned} JV_\Gamma (u):= \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \frac{1}{d_{\mathrm {v}}} \sum _{\textbf{e}, \hat{\textbf{e}} \in E_\mathrm {v}(\Gamma )} \vert [u]_{\textbf{e}}(\mathrm {v}) - [u]_{\hat{\textbf{e}}}(\mathrm {v}) \vert . \end{aligned}$$

Note that \(JV_\Gamma (u)\) measures, in a weighted way, the jumps of u at the vertices. The following results were proved in [34].

Proposition 2.14

For \(u \in BV(\Gamma )\), we have

$$\begin{aligned} \vert Du \vert (\Gamma ) \le TV_\Gamma (u) \le \vert Du \vert (\Gamma ) + JV_\Gamma (u). \end{aligned}$$

If \(u \in BV(\Gamma ) \cap C(\mathrm{int}(V(\Gamma )))\), then

$$\begin{aligned} TV_\Gamma (u) =\vert Du \vert (\Gamma ). \end{aligned}$$

If \(\Gamma \) is a path graph, that is, \(d_\mathrm {v}=2\) for all \(\mathrm {v}\in \mathrm{int}( V(\Gamma ))\), then

$$\begin{aligned} TV_\Gamma (u) = \vert Du \vert (\Gamma ) + JV_\Gamma (u). \end{aligned}$$
(2.8)

Corollary 2.15

For \(u \in BV(\Gamma )\), we have

$$\begin{aligned} TV_\Gamma (u) =0 \ \iff \ u \ \hbox {is constant}. \end{aligned}$$

Then

$$\begin{aligned} \mathrm{Per}_\Gamma (E) =0 \ \iff \ E = \Gamma . \end{aligned}$$

In [34] we give an example showing that, in general, the equality (2.8) does not hold if \( u \not \in C(\mathrm{int}(V(\Gamma )))\) and there exists \(\mathrm {v}\in \mathrm{int}(V(\Gamma ))\) with \(d_\mathrm {v}\ge 3\).
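To make the quantities in Proposition 2.14 concrete, here is a minimal sketch (with illustrative data of our own, not taken from [34]) that evaluates \(JV_\Gamma (u)\) for a function that is constant on each edge of a 3-star; for such a function \(\vert Du \vert (\Gamma ) = 0\), so the proposition only gives the bounds \(0 \le TV_\Gamma (u) \le JV_\Gamma (u)\), while the exact value of \(TV_\Gamma (u)\) still requires the supremum of Definition 2.9.

```python
# Bounds of Proposition 2.14 for a function that is constant on each edge of a 3-star
# (illustrative data, not taken from [34]).  For such a function |Du|(Gamma) = 0, so the
# proposition gives 0 <= TV_Gamma(u) <= JV_Gamma(u).
incident = {"v0": ["e1", "e2", "e3"]}        # interior vertices and their incident edges
traces = {"e1": 0.0, "e2": 1.0, "e3": 1.0}   # value of u (hence its trace at v0) on each edge

def jump_variation(incident, traces):
    """JV_Gamma(u) = sum over interior vertices v of (1/d_v) * sum over ordered pairs of
    incident edges of |[u]_e(v) - [u]_ehat(v)|."""
    total = 0.0
    for v, edges_v in incident.items():
        total += sum(abs(traces[e] - traces[eh]) for e in edges_v for eh in edges_v) / len(edges_v)
    return total

print(jump_variation(incident, traces))      # 4/3, so 0 <= TV_Gamma(u) <= 4/3
```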

2.4 The 1-Laplacian in metric graphs

In [34], in order to study the total variation flow in the metric graph \(\Gamma \), we introduced the energy functional \(\mathcal {F}_\Gamma : L^2(\Gamma ) \rightarrow [0, + \infty ]\) defined by

$$\begin{aligned} \mathcal {F}_\Gamma (u):= \left\{ \begin{array}{ll} \displaystyle TV_\Gamma (u) &{}\quad \hbox {if} \ u\in BV(\Gamma ), \\ + \infty &{}\quad \hbox {if } u\in L^2(\Gamma )\setminus BV(\Gamma ), \end{array} \right. \end{aligned}$$

which is convex and lower semi-continuous, and we have obtained the following characterization of the subdifferential of \(\mathcal {F}_\Gamma \).

Theorem 2.16

Let \(u \in BV(\Gamma )\) and \(v \in L^2(\Gamma )\). The following assertions are equivalent:

  1. (i)

    \(v \in \partial \mathcal {F}_\Gamma (u)\);

  2. (ii)

    there exists \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\) such that

    $$\begin{aligned} v = -\textbf{z}^{\prime }, \quad \hbox {that is,} \quad [v]_\textbf{e}= -[\textbf{z}]_\textbf{e}^{\prime } \ \ \hbox {in} \ \mathcal {D}^{\prime }(0, \ell _\textbf{e}) \ \forall \textbf{e}\in E(\Gamma ) \end{aligned}$$
    (2.9)

    and

    $$\begin{aligned} \int _{\Gamma } u(x) v(x) dx = \mathcal {F}_\Gamma (u); \end{aligned}$$
  3. (iii)

    there exists \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\) such that (2.9) holds and

    $$\begin{aligned} \mathcal {F}_\Gamma (u) = \int _{\Gamma } \textbf{z}Du - \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \frac{1}{d_\mathrm {v}} \sum _{\hat{\textbf{e}}\in \mathrm {E}_\mathrm {v}(\Gamma )} \sum _{\textbf{e}\in E_\mathrm {v}(\Gamma )}[\textbf{z}]_\textbf{e}(\mathrm {v}) \left( [u]_\textbf{e}(\mathrm {v}) - [u]_{\hat{\textbf{e}}}(\mathrm {v}) \right) . \end{aligned}$$

    Moreover, \(D(\partial \mathcal {F}_\Gamma )\) is dense in \(L^2(\Gamma )\).

In [34] the following space was introduced

$$\begin{aligned} G_m(\Gamma ):= \{ v \in L^2(\Gamma ) \ : \ \exists \textbf{z}\in X_K(\Gamma ), \ v = -\textbf{z}' \ \hbox {a.e. in} \ \Gamma \}, \end{aligned}$$

and the following norm was considered on \(G_m(\Gamma )\):

$$\begin{aligned} \Vert v \Vert _{m,*} := \inf \{\Vert \textbf{z}\Vert _\infty \ : \textbf{z}\in X_K(\Gamma ), \ v = -\textbf{z}' \ \hbox {a.e. in} \ \Gamma \}. \end{aligned}$$

In the continuous setting this space was introduced in [39].

Note that, for \(v \in G_m(\Gamma )\), there exists \(\textbf{z}_v\in X_K(\Gamma )\) such that \(v = -\textbf{z}'_v\) and \(\Vert v \Vert _{m,*} = \Vert \textbf{z}_v \Vert _\infty \) (see the proof of Theorem 2.19 in [34]).

From the proof of Theorem 2.16, for \(f \in G_m(\Gamma )\), we have

$$\begin{aligned} \Vert f \Vert _{m,*} = \sup \left\{ \left| \int _\Gamma f(x) u(x) dx \right| : u \in BV(\Gamma ), \ TV_{\Gamma }(u) \le 1\right\} , \end{aligned}$$

and, moreover,

$$\begin{aligned} \partial \mathcal {F}_\Gamma (u) = \left\{ v \in L^2(\Gamma ) \ : \ \Vert v \Vert _{m,*} \le 1, \ \int _{\Gamma } u(x) v(x) dx = TV_\Gamma (u)\right\} . \end{aligned}$$
(2.10)

Definition 2.17

We define the 1-Laplacian operator in the metric graph \(\Gamma \) as

$$\begin{aligned} (u, v ) \in \Delta _1^{\Gamma } \iff -v \in \partial \mathcal {F}_\Gamma (u), \end{aligned}$$

that is, if \(u \in L^2(\Gamma )\cap BV(\Gamma )\), \(v \in L^2(\Gamma )\) and there exists \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\) such that

$$\begin{aligned} v = \textbf{z}^{\prime }, \quad \hbox {that is,} \quad [v]_\textbf{e}= [\textbf{z}]_\textbf{e}^{\prime } \quad \ \hbox {in} \ \mathcal {D}^{\prime }(0, \ell _\textbf{e}) \ \forall \textbf{e}\in E(\Gamma ) \end{aligned}$$

and

$$\begin{aligned} \mathcal {F}_\Gamma (u) = \int _{\Gamma } \textbf{z}Du - \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \frac{1}{d_{\mathrm {v}}} \sum _{\textbf{e}, \hat{\textbf{e}} \in E_\mathrm {v}(\Gamma )} [\textbf{z}]_{\textbf{e}} (\mathrm {v})\left( [u]_{\textbf{e}}(\mathrm {v}) - [u]_{\hat{\textbf{e}}}(\mathrm {v}) \right) . \end{aligned}$$

Remark 2.18

Let us point out that formally

$$\begin{aligned} (u, v ) \in \Delta _1^{\Gamma } \iff v = \left( \frac{Du}{\vert D u \vert }\right) ^{\prime } \iff [v]_\textbf{e}= \left( \frac{D[u]_\textbf{e}}{\vert D[u]_\textbf{e}\vert }\right) ^{\prime } \ \ \ \hbox {in} \ \mathcal {D}^{\prime }(0, \ell _\textbf{e}) \ \forall \textbf{e}\in E(\Gamma ). \end{aligned}$$

Then, the \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\), that appear in the characterization, represent \(\frac{Du}{\vert D u \vert }\). Note that the operator \(\Delta _1^{\Gamma }\) is multivalued. \(\blacksquare \)

3 The Cheeger problem: \(\Gamma \)-Cheeger and \(\Gamma \)-calibrable sets

Given a set \(\Omega \subset \Gamma \) with \(0< \ell (\Omega ) < \ell (\Gamma )\) and \(\mathrm{Per}_\Gamma (\Omega ) >0\), we define the \(\Gamma \)-Cheeger constant of \(\Omega \) by

$$\begin{aligned} h_1^\Gamma (\Omega ) := \inf \left\{ \frac{\mathrm{Per}_\Gamma (E)}{\ell (E)} \ : \ E \subset \Omega , \ \, \ell ( E)>0 \right\} . \end{aligned}$$
(3.1)

A set \(E \subset \Omega \) achieving the infimum in (3.1) is said to be a \(\Gamma \)-Cheeger set of \(\Omega \). Furthermore, we say that \(\Omega \) is \(\Gamma \)-calibrable if it is a \(\Gamma \)-Cheeger set of itself, that is, if

$$\begin{aligned} h_1^\Gamma (\Omega ) = \frac{\mathrm{Per}_\Gamma (\Omega )}{\ell (\Omega )}. \end{aligned}$$

For ease of notation, we will denote

$$\begin{aligned} \lambda ^\Gamma _\Omega := \frac{\mathrm{Per}_\Gamma (\Omega )}{\ell (\Omega )}, \end{aligned}$$

for any set \(\Omega \subset \Gamma \) with \(0<\ell (\Omega )\).

Note that \(\Omega \) is \(\Gamma \)-calibrable if and only if \(\Omega \) minimizes the functional

$$\begin{aligned} \mathrm{Per}_\Gamma (E) - \lambda ^\Gamma _\Omega \ell (E) \end{aligned}$$

over the sets \(E \subset \Omega \) with \(\ell (E) >0\).

It is well known (see for instance [2]) that in \(\mathbb R^N\) any Euclidean ball is a calibrable set. It is easy to see that this also happens in path graphs, that is, when \(d_\mathrm {v}=2\) for all \(\mathrm {v}\in \mathrm{int}(V(\Gamma ))\). Let us see in the next example that this is not true, in general, in metric graphs.

Example 3.1

Consider the metric graph \(\Gamma \) with four vertices and three edges, that is, \(V(\Gamma ) = \{\mathrm {v}_1, \mathrm {v}_2, \mathrm {v}_3, \mathrm {v}_4 \}\) and \(E(\Gamma ) = \{ \textbf{e}_1:=[\mathrm {v}_1, \mathrm {v}_2], \textbf{e}_2:=[\mathrm {v}_2, \mathrm {v}_3], \textbf{e}_3:=[\mathrm {v}_2, \mathrm {v}_4] \}\), with \(\ell _{\textbf{e}_1} =2\), \(\ell _{\textbf{e}_i} =1\), \(i=2,3\), so that \(\mathrm {v}_2\) is a vertex of degree 3.


Consider the ball \(B\left( \mathrm {v},\frac{5}{8} \right) \), where \(\mathrm {v}= c^{-1}_{\textbf{e}_1}\left(\frac{3}{2}\right)\), i.e., the point of \(\textbf{e}_1\) at distance \(\frac{1}{2}\) from \(\mathrm {v}_2\). Then,

$$\begin{aligned} \lambda ^\Gamma _{B\left( \mathrm {v},\frac{5}{8} \right) }:= \frac{\mathrm{Per}_\Gamma (B\left( \mathrm {v},\frac{5}{8} \right) )}{\ell \left( B\left( \mathrm {v},\frac{5}{8} \right) \right) } = \frac{3}{\frac{5}{8}+ \frac{1}{2} +\frac{2}{8}} = \frac{24}{11}. \end{aligned}$$

Now, by (2.4), we have

$$\begin{aligned}&\mathrm{Per}_\Gamma \left( B\left( \mathrm {v},\frac{1}{2} \right) \right) = TV_\Gamma \left( {\chi }_{B\left( \mathrm {v},\frac{1}{2} \right) } \right) = \sup \left\{ \left| \int _\Gamma {\chi }_{B\left( \mathrm {v},\frac{1}{2} \right) }\textbf{z}^{\prime } \right| \ : \ \textbf{z}\in X_K(\Gamma ), \ \Vert \textbf{z}\Vert _\infty \le 1 \right\} \\&\quad = \sup \left\{ \left| - \int _{\Gamma } \textbf{z}D{\chi }_{B\left( \mathrm {v},\frac{1}{2} \right) } + \sum _{\textbf{e}\in \mathrm {E}_{\mathrm {v}_2}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}_2) [{\chi }_{B\left( \mathrm {v},\frac{1}{2} \right) }]_\textbf{e}(\mathrm {v}_2) \right| \ : \ \textbf{z}\in X_K(\Gamma ), \ \Vert \textbf{z}\Vert _\infty \le 1 \right\} \\&\quad = \sup \left\{ \left| [\textbf{z}]_{\textbf{e}_1}( c^{-1}_{\textbf{e}_1}(1)) + [\textbf{z}]_{\textbf{e}_1}(\mathrm {v}_2) \right| \ : \ \textbf{z}\in X_K(\Gamma ), \ \Vert \textbf{z}\Vert _\infty \le 1 \right\} =2, \end{aligned}$$

and consequently

$$\begin{aligned} \lambda ^\Gamma _{B\left( \mathrm {v},\frac{1}{2} \right) } := \frac{\mathrm{Per}_\Gamma (B\left( \mathrm {v},\frac{1}{2} \right) )}{\ell \left( B\left( \mathrm {v},\frac{1}{2} \right) \right) } = 2. \end{aligned}$$

Therefore, since \(B\left( \mathrm {v},\frac{1}{2} \right) \subset B\left( \mathrm {v},\frac{5}{8} \right) \) and \(2 < \frac{24}{11}\), the ball \(B\left( \mathrm {v},\frac{5}{8} \right) \) is not calibrable.

It is easy to see that if \(E \subset \Omega := B\left( \mathrm {v},\frac{5}{8} \right) \), with \(\ell (E) >0\), then \(\mathrm{Per}_\Gamma (E) \ge 2\), with \(\mathrm{Per}_\Gamma (E) = 2\) if E is an interval contained in a single edge \(\textbf{e}_i\). Among such sets, the one with the largest length is \(E:= [c^{-1}_{\textbf{e}_1}(\frac{7}{8}), \mathrm {v}_2]\), whose length is \(\frac{9}{8}\). Therefore,

$$\begin{aligned} h_1^\Gamma (\Omega ) =\frac{\mathrm{Per}_\Gamma (E)}{\ell (E)} = \frac{2}{\frac{9}{8}} = \frac{16}{9} <2. \end{aligned}$$

Thus, E is the \(\Gamma \)-Cheeger set of \(\Omega \). \(\blacksquare \)
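The arithmetic in this example can be checked with exact fractions; the perimeter values \(\mathrm{Per}_\Gamma (B(\mathrm {v},\frac{5}{8}))=3\) and \(\mathrm{Per}_\Gamma (E)=2\) computed above are taken as given.

```python
from fractions import Fraction as F

# Exact check of the arithmetic in Example 3.1, taking the perimeter values computed
# above, Per(B(v, 5/8)) = 3 and Per(E) = 2, as given.
ell_ball = F(5, 8) + F(1, 2) + F(2, 8)    # length of B(v, 5/8): 11/8
ratio_ball = F(3) / ell_ball              # 24/11
ell_E = F(2) - F(7, 8)                    # length of E = [c^{-1}(7/8), v2]: 9/8
ratio_E = F(2) / ell_E                    # 16/9
print(ratio_ball, ratio_E, ratio_E < 2 < ratio_ball)   # 24/11 16/9 True
```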

Theorem 3.2

Let \(\Omega \subset \Gamma \) with \(\mathrm{Per}_\Gamma (\Omega ) >0\) and \(\ell (\Omega )>0\). Then there exists a \(\Gamma \)-Cheeger set of \(\Omega \).

Proof

Let \(E_n \subset \Omega \), with \(\ell ( E_n)>0\), be such that

$$\begin{aligned} h_1^\Gamma (\Omega ) = \lim _{n \rightarrow \infty } \frac{\mathrm{Per}_\Gamma (E_n)}{\ell (E_n)}. \end{aligned}$$

By the Embedding Theorem (Theorem 2.6), taking a subsequence if necessary, we have that there exists \(E \subset \Omega \) such that

$$\begin{aligned} {\chi }_E = \lim _{n \rightarrow \infty } {\chi }_{E_n} \quad \hbox {in } \ L^1(\Gamma ) \ \hbox {and a.e}. \end{aligned}$$

Then, by the lower semi-continuity of the total variation (Proposition 2.12), we have

$$\begin{aligned} \mathrm{Per}_\Gamma (E) \le \liminf _{n \rightarrow \infty } \mathrm{Per}_\Gamma (E_n). \end{aligned}$$

Therefore

$$\begin{aligned} h_1^\Gamma (\Omega ) = \frac{\mathrm{Per}_\Gamma (E)}{\ell (E)}. \end{aligned}$$

\(\square \)

Remark 3.3

Let \(\Omega \subset \Gamma \) with \(\mathrm{Per}_\Gamma (\Omega ) >0\) and \(\ell (\Omega )>0\). If there exist \(\lambda >0\) and a function \(\xi : \Gamma \rightarrow \mathbb R\), with \(\xi (x) = 1\) for all \(x \in \Omega \), satisfying

$$\begin{aligned} - \lambda \xi \in \Delta ^\Gamma _1 {\chi }_\Omega , \quad \hbox {in} \ \Gamma , \end{aligned}$$

then

$$\begin{aligned} \lambda = \lambda ^\Gamma _\Omega . \end{aligned}$$

In fact, we have that there exists \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\), such that

$$\begin{aligned} -\lambda \xi = \textbf{z}^{\prime }, \quad \mathcal {F}_\Gamma ({\chi }_\Omega ) = \int _{\Gamma } \textbf{z}D{\chi }_\Omega - \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \frac{1}{d_{\mathrm {v}}} \sum _{\textbf{e}, \hat{\textbf{e}} \in E_\mathrm {v}(\Gamma )} [\textbf{z}]_{\textbf{e}} (\mathrm {v})\left( [{\chi }_\Omega ]_{\textbf{e}}(\mathrm {v}) - [{\chi }_\Omega ]_{\hat{\textbf{e}}}(\mathrm {v})\right) . \end{aligned}$$

Then, applying Green’s formula (2.4), we have

$$\begin{aligned} \lambda \ell (\Omega )= & {} \int _\Gamma {\chi }_\Omega \lambda \xi dx = -\int _\Gamma {\chi }_\Omega \textbf{z}' dx = \int _\Gamma \textbf{z}D{\chi }_\Omega - \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [{\chi }_\Omega ]_\textbf{e}(\mathrm {v})\\= & {} \mathcal {F}_\Gamma ({\chi }_\Omega ) = \mathrm{Per}_\Gamma (\Omega ). \end{aligned}$$

It is well known (see [28]) that the classical Cheeger constant

$$\begin{aligned} h_1(\Omega ):= \inf \left\{ \frac{Per(E)}{\vert E \vert } \, : \, E\subset \Omega , \ \vert E \vert >0 \right\} , \end{aligned}$$

for a bounded smooth domain \(\Omega \subset \mathbb R^N\), is an optimal Poincaré constant, namely, it coincides with the first eigenvalue of the 1-Laplacian:

$$\begin{aligned} h_1(\Omega )=\Lambda _1(\Omega ):= \inf \left\{ \frac{\displaystyle \int _\Omega \vert Du \vert +\displaystyle \int _{\partial \Omega } \vert u \vert d \mathcal {H}^{N-1}}{ \displaystyle \Vert u \Vert _{L^1(\Omega )}} \, : \, u \in BV(\Omega ), \ \Vert u \Vert _\infty = 1 \right\} . \end{aligned}$$

In order to get, in our context, a version of this result, we introduce the following constant. For \(\Omega \subset \Gamma \) with \(0<\ell (\Omega )< \ell (\Gamma )\), we define

$$\begin{aligned} \begin{array}{lll} \displaystyle \Lambda _1^\Gamma (\Omega )= \inf \left\{ TV_\Gamma (u) : u \in BV(\Gamma ), \ u= 0 \ \hbox {in} \ \Gamma \setminus \Omega , \ u \ge 0, \ \int _\Gamma u(x) \, dx = 1, \ TV_\Gamma (u)>0 \right\} \\ \displaystyle {\Lambda _1^\Gamma (\Omega )} = \inf \left\{ \frac{ TV_\Gamma (u)}{\displaystyle \int _\Gamma u(x) \, dx} \ : \ u \in BV(\Gamma ), \ u= 0 \ \hbox {in} \ \Gamma \setminus \Omega ,\ u \ge 0, \ u\not \equiv 0, \ TV_\Gamma (u) >0 \right\} . \end{array} \end{aligned}$$
(3.2)

Theorem 3.4

Let \(\Omega \subset \Gamma \) with \(0< \ell (\Omega ) < \ell (\Gamma )\). Then,

$$\begin{aligned} h_1^\Gamma (\Omega ) = \Lambda _1^\Gamma (\Omega ). \end{aligned}$$
(3.3)

Proof

Given a subset \(E \subset \Omega \) with \(\ell (E )> 0\), we have

$$\begin{aligned} \frac{ TV_\Gamma ({\chi }_E)}{\Vert {\chi }_E \Vert _{L^1(\Gamma )}} = \frac{\mathrm{Per}_\Gamma (E)}{\ell (E)}. \end{aligned}$$

Therefore,

$$\begin{aligned} \Lambda _1^\Gamma (\Omega ) \le h_1^\Gamma (\Omega ). \end{aligned}$$

Suppose that the opposite inequality does not hold. Then there exists \(u \in BV(\Gamma )\), with \(u= 0\) in \(\Gamma \setminus \Omega \), \(u \ge 0\), \(u\not \equiv 0\) and \(TV_\Gamma (u) >0\), such that

$$\begin{aligned} \frac{ TV_\Gamma (u)}{\displaystyle \int _\Gamma u(x) dx} < h_1^\Gamma (\Omega ). \end{aligned}$$

Then, by the coarea formula (2.7) and Cavalieri's principle, we obtain

$$\begin{aligned} 0 > TV_\Gamma (u) - h_1^\Gamma (\Omega ) \int _\Gamma u(x) dx =\int _0^\infty \left( \mathrm{Per}_\Gamma (E_t(u)) - h_1^\Gamma (\Omega ) \ell (E_t(u)) \right) dt \ge 0, \end{aligned}$$

which is a contradiction, and consequently \(\Lambda _1^\Gamma (\Omega ) = h_1^\Gamma (\Omega )\). \(\square \)

Let us point out that the equality (3.3) was obtained in [20, Theorem 6.2], but with a different concept of total variation, and therefore of perimeter (see Remark 2.11).

Remark 3.5

We are going to give a characterization of the solutions of the Euler-Lagrange equation of the variational problem (3.2). We set

$$\begin{aligned} K_\Omega := \left\{ u \in BV(\Gamma ) \ : \ u= 0 \ \hbox {in} \ \Gamma \setminus \Omega , \ u \ge 0, \ \int _\Gamma u(x) \, dx = 1, \ TV_\Gamma (u) >0 \right\} , \end{aligned}$$

and denote by \(I_{K_\Omega }\) the indicator function of \(K_\Omega \), defined by

$$\begin{aligned} I_{K_\Omega }(u) := \left\{ \begin{array}{ll} 0, &{}\quad \hbox {if} \ \ u \in K_\Omega , \\ \infty , &{}\quad \hbox {if } \ \ u \not \in K_\Omega .\end{array} \right. \end{aligned}$$

Then,

$$\begin{aligned}&\inf \left\{ TV_\Gamma (u) \ : \ u \in BV(\Gamma ), \ u= 0 \ \hbox {in} \ \Gamma \setminus \Omega , \ u \ge 0, \ \int _\Gamma u(x) \, dx = 1, \ TV_\Gamma (u) >0 \right\} \\&\quad = \inf \left\{ \mathcal {F}_\Gamma (u)+ I_{K_\Omega }(u) \ : \ u \in L^2(\Gamma )\right\} . \end{aligned}$$

Therefore, u is a minimizer of (3.2) if and only if \(0 \in \partial (\mathcal {F}_\Gamma + I_{K_\Omega })(u) = \partial \mathcal {F}_\Gamma (u) + \partial I_{K_\Omega }(u),\) where the last equality is a consequence of [9, Corollary 2.11]. Then, u is a minimizer of (3.2) if and only if \(u \in K_\Omega \) and there exists \(v \in \partial \mathcal {F}_\Gamma (u)\) such that \(- v \in \partial I_{K_\Omega }(u)\), that is, \(\int _\Gamma uv dx \le \int _\Gamma wv dx\) for all \(w \in K_\Omega \). Now, by Theorem 2.16, we have \(v \in \partial \mathcal {F}_\Gamma (u)\) if and only if there exists \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\), such that

$$\begin{aligned} v= & {} -\textbf{z}^{\prime } \quad \hbox {and} \quad \int _{\Gamma } u(x) v(x) dx = \mathcal {F}_\Gamma (u) \\= & {} \int _{\Gamma } \textbf{z}Du - \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \frac{1}{d_\mathrm {v}} \sum _{\hat{\textbf{e}}\in \mathrm {E}_\mathrm {v}(\Gamma )} \sum _{\textbf{e}\in E_\mathrm {v}(\Gamma )}[\textbf{z}]_\textbf{e}(\mathrm {v}) \left( [u]_\textbf{e}(\mathrm {v}) - [u]_{\hat{\textbf{e}}}(\mathrm {v}) \right) . \end{aligned}$$

Consequently, we have that u is a minimizer of (3.2) if and only if \(u \in K_\Omega \) and there exists \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\) such that

$$\begin{aligned} TV_\Gamma (u) \le \int _\Gamma \textbf{z}Dw - \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \frac{1}{d_\mathrm {v}} \sum _{\hat{\textbf{e}}\in \mathrm {E}_\mathrm {v}(\Gamma )} \sum _{\textbf{e}\in E_\mathrm {v}(\Gamma )}[\textbf{z}]_\textbf{e}(\mathrm {v}) \left( [w]_\textbf{e}(\mathrm {v}) - [w]_{\hat{\textbf{e}}}(\mathrm {v}) \right) ,\quad \forall w \in K_\Omega . \end{aligned}$$

\(\blacksquare \)

The Max-Flow Min-Cut Theorem on networks is due to Ford and Fulkerson [24]; its continuous counterpart was first studied by Strang [44] in the particular case of the plane. Given a bounded, planar domain \(\Omega \), and given two functions \(F,c : \Omega \rightarrow \mathbb R\), we want to find the maximal value of \(\lambda \in \mathbb R\) such that there exists a vector field \(V :\Omega \rightarrow \mathbb R^2\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} \mathrm{div} \, V = \lambda F \\ \Vert V \Vert _\infty \le c. \end{array} \right. \end{aligned}$$

The problem can be interpreted as follows: given a source or sink term F, we want to find the maximal flow in \(\Omega \) under the capacity constraint given by c. It turns out that if \(F = 1\) and \(c = 1\), then the maximal value of \(\lambda \) is equal to the Cheeger constant of \(\Omega \), while the boundary of a Cheeger set is the associated minimal cut (see [26, 45]). Let us see now that a similar result also holds in metric graphs.

Theorem 3.6

Let \(\Omega \subset \Gamma \) with \(0< \ell (\Omega ) < \ell (\Gamma )\). Then,

$$\begin{aligned} \begin{array}{lll} h_1^\Gamma (\Omega ) &{}= \sup \{ h \in \mathbb R^+ \ : \ \exists \textbf{z}\in X_K(\Gamma ), \ \Vert \textbf{z}\Vert _\infty \le 1, \ \textbf{z}' \ge h \ \hbox {in} \ \Omega \} \\ &{}= \sup \left\{ \frac{1}{\Vert \textbf{z}\Vert _\infty } \ : \ \textbf{z}\in X_K(\Gamma ), \ \textbf{z}' = {\chi }_\Omega \right\} \\ &{}= \sup \left\{ \frac{1}{\Vert \textbf{z}\Vert _\infty } \ : \ \textbf{z}\in X_K(\Gamma ), \ \textbf{z}' = 1 \ \hbox {in} \ \Omega \right\} . \end{array} \end{aligned}$$

Proof

Let

$$\begin{aligned} B:= \{ h \in \mathbb R^+ \ : \ \exists \textbf{z}\in X_K(\Gamma ), \ \Vert \textbf{z}\Vert _\infty \le 1, \ \textbf{z}' \ge h \ \hbox {in} \ \Omega \}, \end{aligned}$$

and

$$\begin{aligned} \alpha := \sup B. \end{aligned}$$

Given \(h \in B\) and \(E \subset \Omega \) with \(\ell (E) >0\), applying (2.5), we have

$$\begin{aligned} h \ell (E) = \int _E h dx \le \int _E \textbf{z}' dx \le \mathrm{Per}_\Gamma (E). \end{aligned}$$

Hence,

$$\begin{aligned} h \le \frac{ \mathrm{Per}_\Gamma (E)}{\ell (E)}. \end{aligned}$$

Then, taking the supremum in h and the infimum in E, we obtain that \(\alpha \le h_1^\Gamma (\Omega )\).

On the other hand, by Theorem 3.4, it is easy to see that

$$\begin{aligned} \frac{1}{ h_1^\Gamma (\Omega )}= & {} \sup \left\{ \frac{\int _\Gamma u(x) dx}{\Vert u' \Vert _{L^1(\Gamma )} } : u \in W^{1,1}(\Gamma ), \ u= 0 \ \hbox {in} \ \Gamma \setminus \Omega ,\ u \ge 0, \ u\not \equiv 0, \Vert u' \Vert _{L^1(\Gamma )} >0 \right\} \\= & {} \sup \left\{ \int _\Omega u(x) dx : u \in W^{1,1}(\Gamma ), \ \Vert u' \Vert _{L^1(\Gamma )} \le 1, \ u= 0 \ \hbox {in} \ \Gamma \setminus \Omega ,\ u \ge 0, \ u\not \equiv 0 \right\} \\= & {} - \inf \left\{ -\int _\Omega u(x) dx : u \in W^{1,1}(\Gamma ), \ \Vert u' \Vert _{L^1(\Gamma )} \le 1, \ u= 0 \ \hbox {in} \ \Gamma \setminus \Omega ,\ u \ge 0, \ u\not \equiv 0 \right\} . \end{aligned}$$

Then,

$$\begin{aligned} - \frac{1}{ h_1^\Gamma (\Omega )} = \inf \left\{ F(u)+ G(L(u)) \ : \ u \in L^1(\Gamma ) \right\} , \end{aligned}$$

where \(L : W^{1,1}(\Gamma ) \rightarrow L^1(\Gamma )\) is the linear map \(L(u) :=u'\), \(F(u):= -\int _\Gamma u {\chi }_\Omega dx\), and \(G: L^1(\Gamma ) \rightarrow [0, +\infty ]\) is the convex function

$$\begin{aligned} G(v):= \left\{ \begin{array}{ll} 0 &{}\quad \hbox {if} \ \Vert v \Vert _{L^1(\Gamma )} \le 1, \\ + \infty &{}\quad \hbox {otherwise}. \end{array} \right. \end{aligned}$$

By the Fenchel–Rockafellar duality Theorem given in [22, Remark 4.2], we have

$$\begin{aligned} \inf \left\{ F(u)+ G(L(u)) \ : \ u \in L^1(\Gamma ) \right\}= & {} \sup \left\{ -G^*(-\textbf{z}) - F^*(L^*(\textbf{z})) : \textbf{z}\in L^\infty (\Gamma ) \right\} \\= & {} - \inf \left\{ F^*(L^*(\textbf{z})) + G^*(-\textbf{z}) : \textbf{z}\in L^\infty (\Gamma ) \right\} . \end{aligned}$$

Now, \(L^*(\textbf{z}) = - \textbf{z}'\), \(G^*(\textbf{z}) = \Vert \textbf{z}\Vert _{L^\infty (\Gamma )}\) and

$$\begin{aligned} F^*(w) = \sup _{u \in L^1(\Gamma )} \left\{ \int _\Gamma w u \, dx +\int _\Gamma u {\chi }_\Omega \, dx \right\} . \end{aligned}$$

Hence

$$\begin{aligned} F^*(L^*(\textbf{z})) = \sup _{u \in L^1(\Gamma )} \left\{ -\int _\Gamma \textbf{z}' u \, dx +\int _\Gamma u {\chi }_\Omega \, dx \right\} . \end{aligned}$$

Hence, \(F^*(L^*(\textbf{z})) = 0\) if \(\textbf{z}' = {\chi }_\Omega \) and \(F^*(L^*(\textbf{z})) = +\infty \) otherwise. Therefore,

$$\begin{aligned} \frac{1}{ h_1^\Gamma (\Omega )} = \inf \left\{ \Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \ : \ \textbf{z}\in L^\infty (\Gamma ), \ \textbf{z}' = {\chi }_\Omega \right\} , \end{aligned}$$

from which it follows that

$$\begin{aligned} h_1^\Gamma (\Omega )= & {} \sup \left\{ \frac{1}{\Vert \textbf{z}\Vert _{L^\infty (\Gamma )}} \ : \ \textbf{z}\in L^\infty (\Gamma ), \ \textbf{z}' = {\chi }_\Omega \right\} \\\le & {} \sup \left\{ \frac{1}{\Vert \textbf{z}\Vert _\infty } \ : \ \textbf{z}\in X_K(\Gamma ), \ \textbf{z}' = 1 \ \hbox {in} \ \Omega \right\} \le \alpha , \end{aligned}$$

which finishes the proof. \(\square \)

Let us recall that, in the local case, a set \(\Omega \subset \mathbb R^N\) is called calibrable if

$$\begin{aligned} \frac{\text{ Per }(\Omega )}{\vert \Omega \vert } = h(\Omega ):= \inf \left\{ \frac{\text{ Per }(E)}{\vert E\vert } \ : \ E \subset \Omega , \ E \ \hbox { with finite perimeter,} \ \vert E \vert > 0 \right\} . \end{aligned}$$

The following characterization of convex calibrable sets is proved in [2].

Theorem 3.7

([2]) Given a bounded convex set \(\Omega \subset \mathbb R^N\) of class \(C^{1,1}\), the following assertions are equivalent:

  1. (a)

    \(\Omega \) is calibrable.

  2. (b)

    \({\chi }_\Omega \) satisfies \(- \Delta _1 {\chi }_\Omega = \frac{\text{ Per }(\Omega )}{\vert \Omega \vert } {\chi }_\Omega \), where \(\Delta _1 u:= \mathrm{div} \left( \frac{Du}{\vert Du \vert }\right) \).

Remark 3.8

By (2.10), we have

$$\begin{aligned} - \lambda ^\Gamma _\Omega {\chi }_\Omega \in \Delta _1^\Gamma {\chi }_\Omega \iff \Vert \lambda ^\Gamma _\Omega {\chi }_\Omega \Vert _{m,*} \le 1, \ \hbox {and} \int _\Gamma \lambda ^\Gamma _\Omega {\chi }_\Omega {\chi }_\Omega = TV_\Gamma ({\chi }_\Omega ). \end{aligned}$$

Now

$$\begin{aligned} \int _\Gamma \lambda ^\Gamma _\Omega {\chi }_\Omega {\chi }_\Omega = \lambda ^\Gamma _\Omega \ell (\Omega ) = TV_\Gamma ({\chi }_\Omega ). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \begin{array}{ll} - \lambda ^\Gamma _\Omega {\chi }_\Omega \in \Delta _1^\Gamma {\chi }_\Omega \iff \Vert \lambda ^\Gamma _\Omega {\chi }_\Omega \Vert _{m,*} \le 1\\ \iff \sup \left\{ \left| \displaystyle \int _\Omega u(x) dx \right| : u \in BV(\Gamma ), \ TV_{\Gamma }(u) \le 1 \right\} \le \frac{\ell (\Omega )}{\mathrm{Per}_\Gamma (\Omega )}. \end{array} \end{aligned}$$
(3.4)

\(\blacksquare \)

In order to obtain a result analogous to Theorem 3.7 we need the following notion of convexity.

Definition 3.9

We say that \(\Omega \subset \Gamma \) is path-convex if for any \(E \varsubsetneq \Gamma \) with \(\ell (E) >0\),

$$\begin{aligned} \text{ Per}_\Gamma (\Omega \cap E) \le \text{ Per}_\Gamma (E). \end{aligned}$$

We have the following version of Theorem 3.7.

Theorem 3.10

Let \(\Omega \subset \Gamma \) with \(0<\ell (\Omega )<\ell (\Gamma )\). We have:

  1. (i)

    If \({\chi }_\Omega \) satisfies

    $$\begin{aligned} - \lambda ^\Gamma _\Omega {\chi }_\Omega \in \Delta _1^\Gamma {\chi }_\Omega \, \quad \hbox {in} \ \Gamma , \end{aligned}$$
    (3.5)

    then \(\Omega \) is \(\Gamma \)-calibrable.

  2. (ii)

    If \(\Omega \) is path-convex and \(\Omega \) is \(\Gamma \)-calibrable, then Eq. (3.5) holds.

Proof

  1. (i)

    For any \(E \subset \Omega \) with \(\ell ( E)>0\), applying (3.4) with \(u:= \frac{{\chi }_E}{\mathrm{Per}_\Gamma (E)}\), we have

    $$\begin{aligned} \int _\Omega \frac{{\chi }_E}{\mathrm{Per}_\Gamma (E)} \le \frac{\ell (\Omega )}{\mathrm{Per}_\Gamma (\Omega )}. \end{aligned}$$

    Then,

    $$\begin{aligned} \frac{\mathrm{Per}_\Gamma (\Omega )}{\ell (\Omega )} \le \frac{\mathrm{Per}_\Gamma (E)}{\ell (E)}, \end{aligned}$$

    and consequently \(\Omega \) is \(\Gamma \)-calibrable.

  2. (ii)

    Let us prove that the function \(f := \lambda ^\Gamma _\Omega {\chi }_\Omega \) satisfies \(\Vert f \Vert _{m,*} \le 1\). Indeed, if \(w \in BV(\Gamma ) \cap L^2(\Gamma )\) is nonnegative, by the coarea formula, we have

    $$\begin{aligned} \int _\Gamma f(x) w(x) dx= & {} \int _0^\infty \int _\Gamma \lambda ^\Gamma _\Omega {\chi }_\Omega {\chi }_{E_t(w)} dx dt = \int _0^\infty \lambda ^\Gamma _\Omega \, \ell (\Omega \cap E_t(w)) dt\\\le & {} \int _0^\infty \mathrm{Per}_\Gamma (\Omega \cap E_t(w)) dt \le \int _0^\infty \mathrm{Per}_\Gamma (E_t(w)) dt = TV_\Gamma (w). \end{aligned}$$

Splitting any function \(w \in BV(\Gamma )\) into its positive and negative parts and using the above inequality, one can prove that

$$\begin{aligned} \left| \int _\Gamma f(x) w(x) dx \right| \le TV_\Gamma (w), \end{aligned}$$

from which it follows that \(\Vert f \Vert _{m,*} \le 1\). Then, by (3.4), we have that \({\chi }_\Omega \) satisfies (3.5). \(\square \)

Remark 3.11

  1. (i)

Note that in Eq. (3.5) we can replace \({\chi }_\Omega \) by a function \(\xi \) such that \(\xi (x) = 1\) for every \(x \in \Omega \).

  2. (ii)

Let us see that the assumption that \(\Omega \) is path-convex is necessary in (ii). To this end, we give an example of a \(\Gamma \)-calibrable set that is not path-convex and does not satisfy (3.5).

Consider the metric graph \(\Gamma \) with two vertices and one edge, that is, \(V(\Gamma ) = \{\mathrm {v}_1, \mathrm {v}_2 \}\) and \(E(\Gamma ) = \{ \textbf{e}:=[\mathrm {v}_1, \mathrm {v}_2] \}\), with \(\ell _{\textbf{e}} =5\). Let \(\Omega := [c_\textbf{e}^{-1}(1), c_\textbf{e}^{-1}(2)] \cup [c_\textbf{e}^{-1}(3), c_\textbf{e}^{-1}(4)]\).


If \(E \subset \Omega \) with \(\mathrm{Per}_\Gamma (E)>0\) and \(\ell ( E)>0\), then every connected component of E of positive length is contained in one of the two intervals of length 1 forming \(\Omega \); hence, if E has m such components, \(\mathrm{Per}_\Gamma (E) \ge 2m\) and \(\ell (E) \le m\), so that \(\frac{\mathrm{Per}_\Gamma (E)}{\ell (E)} \ge 2 = \frac{\mathrm{Per}_\Gamma (\Omega )}{\ell (\Omega )}\). Hence, \(\Omega \) is \(\Gamma \)-calibrable. On the other hand, if \(E:= [c_\textbf{e}^{-1}(1), \mathrm {v}_2]\), we have

$$\begin{aligned} \mathrm{Per}_\Gamma (\Omega \cap E) = 4 > \mathrm{Per}_\Gamma ( E) =1. \end{aligned}$$

Thus, \(\Omega \) is not path-convex. Now by [8, Theorem 2.11] (see also [12]), \({\chi }_\Omega \) does not satisfy

$$\begin{aligned} - \lambda ^\Gamma _\Omega {\chi }_\Omega \in \Delta _1^\Gamma {\chi }_\Omega \, \quad \hbox {in} \ \Gamma , \end{aligned}$$
(3.6)

since if Eq. (3.6) has a solution, then \(\Omega \) must be of the form \(\Omega = [a, b]\).

A celebrated result of De Giorgi [18] states that if E is a set of finite perimeter in \(\mathbb R^N\) and \(E^*\) is a ball such that \(|E^*| = |E|\), then \(\mathrm{Per}(E^*) \le \mathrm{Per}(E)\), with equality holding if and only if E is itself a ball. This implies that

$$\begin{aligned} h(\Omega ^*) \le h(\Omega ). \end{aligned}$$

In the next example we will see that this isoperimetric inequality does not hold in metric graphs.

Example 3.12

Consider the metric graph \(\Gamma \) of Example 3.1, that is, \(V(\Gamma ) = \{\mathrm {v}_1, \mathrm {v}_2, \mathrm {v}_3, \mathrm {v}_4 \}\) and \(E(\Gamma ) = \{ \textbf{e}_1:=[\mathrm {v}_1, \mathrm {v}_2], \textbf{e}_2:=[\mathrm {v}_2, \mathrm {v}_3], \textbf{e}_3:=[\mathrm {v}_2, \mathrm {v}_4] \}\), with \(\ell _{\textbf{e}_1} =2\), \(\ell _{\textbf{e}_i} =1\), \(i=2,3\). If \(E:= [\mathrm {v}_1, c_{\textbf{e}_1}^{-1}(\frac{3}{2})]\), we have \(\ell (E) = \frac{3}{2} = \ell \left( B \left( \mathrm {v}_2, \frac{1}{2} \right) \right) \). Now,

$$\begin{aligned} \mathrm{Per}_\Gamma (E) =1, \quad \hbox {and} \quad \mathrm{Per}_\Gamma \left( B\left( \mathrm {v}_2, \frac{1}{2} \right) \right) =3. \end{aligned}$$

\(\blacksquare \)

We now introduce the eigenvalue problem associated with the operator \(-\Delta ^\Gamma _1\) and its relation to the Cheeger minimization problem.

Recall that

$$\begin{aligned} \mathrm{sign}(u)(x):= \left\{ \begin{array}{lll} 1 &{}\quad \hbox {if} \ \ u(x) > 0, \\ -1 &{}\quad \hbox {if} \ \ u(x) < 0, \\ \left[ -1,1\right] &{}\quad \hbox {if} \ \ u(x) = 0. \end{array}\right. \end{aligned}$$

Definition 3.13

A pair \((\lambda , u) \in \mathbb R\times BV(\Gamma )\) is called a \(\Gamma \)-eigenpair of the operator \(-\Delta ^\Gamma _1\) on X if \(\Vert u \Vert _{L^1(\Gamma )} = 1\) and there exists \(\xi \in \mathrm{sign}(u)\) (i.e., \(\xi (x) \in \mathrm{sign}(u(x))\) for every \(x\in \Gamma \)) such that

$$\begin{aligned} \lambda \, \xi \in \partial \mathcal {F}_\Gamma (u) = - \Delta ^\Gamma _1 u. \end{aligned}$$

The function u is called a \(\Gamma \)-eigenfunction and \(\lambda \) a \(\Gamma \)-eigenvalue associated with u.

Observe that, if \((\lambda , u)\) is a \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\), then \((\lambda , - u)\) is also a \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\).

Remark 3.14

By Theorem 2.16, the following statements are equivalent:

  1. (1)

\((\lambda , u)\) is a \(\Gamma \)-eigenpair of the operator \(-\Delta ^\Gamma _1\).

  2. (2)

    \(u \in BV(\Gamma )\), \(\Vert u \Vert _{L^1(\Gamma )} = 1\) and there exists \(\xi \in \mathrm{sign}(u)\) and \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\) such that

    $$\begin{aligned} \lambda \xi = -\textbf{z}^{\prime }, \end{aligned}$$
    (3.7)

    and

    $$\begin{aligned} \lambda = TV_\Gamma (u); \end{aligned}$$
    (3.8)
  3. (3)

    \(u \in BV(\Gamma )\), \(\Vert u \Vert _{L^1(\Gamma )} = 1\) and there exists \(\xi \in \mathrm{sign}(u)\) and there exists \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\) such that (3.7) holds and

$$\begin{aligned} TV_\Gamma (u) = \int _{\Gamma } \textbf{z}Du - \sum _{\mathrm {v}\in \mathrm{int}(V(\Gamma ))} \frac{1}{d_{\mathrm {v}}} \sum _{\textbf{e}, \hat{\textbf{e}} \in E_\mathrm {v}(\Gamma )} [\textbf{z}]_{\textbf{e}} (\mathrm {v})\left( [u]_{\textbf{e}}(\mathrm {v}) - [u]_{\hat{\textbf{e}}}(\mathrm {v})\right) . \end{aligned}$$
    (3.9)

    \(\blacksquare \)

Proposition 3.15

Let \((\lambda , u)\) be an \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\). Then,

  1. (i)

\(\lambda = 0 \ \iff \ u\) is constant, that is, \(u= \frac{1}{\ell (\Gamma )}\) or \(u=-\frac{1}{\ell (\Gamma )}\).

  2. (ii)

    \(\lambda \not = 0 \ \iff \) there exists \(\xi \in \mathrm{sign}(u)\) such that \(\displaystyle \int _\Gamma \xi (x) dx = 0.\)

Proof

  1. (i)

    By (3.8), if \(\lambda = 0\), we have that \(TV_\Gamma (u) =0\) and then, by (2.15), we get that u is constant. Thus, since \(\Vert u\Vert _{L^1(\Gamma )}=1\), either \(u= \frac{1}{\ell (\Gamma )}\), or \(u=-\frac{1}{\ell (\Gamma )}\). Similarly, if u is constant a.e. then \(TV_\Gamma (u) =0\) and, by (3.8), \(\lambda =0\).

  2. (ii)

(\(\Longleftarrow \)) If \(\lambda =0\), by (i) we have that \(u= \frac{1}{\ell (\Gamma )}\) or \(u=-\frac{1}{\ell (\Gamma )}\), which contradicts the existence of \(\xi \in \mathrm{sign}(u)\) such that \( \int _\Gamma \xi (x) dx = 0\). (\(\Longrightarrow \)) By Remark 3.14 there exist \(\xi \in \mathrm{sign}(u)\) and \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\), satisfying (3.7), (3.8) and (3.9). Hence, by Green’s formula (2.4), we have

$$\begin{aligned} \lambda \int _\Gamma \xi (x) dx = - \int _\Gamma \textbf{z}' \, dx =0. \end{aligned}$$

    Therefore, since \(\lambda \not = 0\),

    $$\begin{aligned} \int _\Gamma \xi (x) dx =0. \end{aligned}$$

    \(\square \)

Recall that, given a function \(u : \Gamma \rightarrow \mathbb R\), \(\mu \in \mathbb R\) is a median of u with respect to the measure \(\ell \) if

$$\begin{aligned} \ell (\{ x \in \Gamma \ : \ u(x) < \mu \}) \le \frac{1}{2} \ell (\Gamma ), \quad \ell (\{ x \in \Gamma \ : \ u(x) > \mu \}) \le \frac{1}{2} \ell (\Gamma ). \end{aligned}$$

We denote by \(\mathrm{med}_\ell (u)\) the set of all medians of u. It is easy to see that

$$\begin{aligned}&\mu \in \mathrm{med}_\ell (u) \iff \\&- \ell (\{ u = \mu \}) \le \ell (\{ x \in \Gamma \ : \ u(x) > \mu \}) - \ell (\{ x \in \Gamma \ : \ u(x) < \mu \}) \le \ell (\{ u = \mu \}), \end{aligned}$$

from where it follows that

$$\begin{aligned} 0 \in \mathrm{med}_\ell (u) \iff \exists \xi \in \mathrm{sign}(u) \ \hbox {such that} \ \int _\Gamma \xi (x) d x = 0. \end{aligned}$$
(3.10)
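For piecewise constant functions, the two defining inequalities can be checked directly. The following minimal sketch (ours, purely illustrative; the encoding and the function name are assumptions, not the paper's) represents a function on \(\Gamma \) as a finite list of (value, length) pieces and tests whether a given \(\mu \) is a median with respect to \(\ell \).

```python
# Illustrative check of the definition of a median (not from the paper).
def is_median(mu, pieces):
    """pieces = [(value, length), ...] describes a piecewise constant u on Gamma."""
    total = sum(length for _, length in pieces)
    below = sum(length for value, length in pieces if value < mu)   # ell({u < mu})
    above = sum(length for value, length in pieces if value > mu)   # ell({u > mu})
    return below <= total / 2 and above <= total / 2

# u = chi_D - chi_{Gamma \ D}:
print(is_median(0.0, [(1.0, 2.0), (-1.0, 2.0)]))   # True:  ell(D) = ell(Gamma \ D)
print(is_median(0.0, [(1.0, 2.0), (-1.0, 3.0)]))   # False: ell({u < 0}) = 3 > 5/2
```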

By Proposition 3.15 and relation (3.10), we have the following result that was obtained for finite weighted graphs by Hein and Bühler in [27].

Corollary 3.16

If \((\lambda , u)\) is a \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\), then

$$\begin{aligned} \lambda \not = 0\ \Longleftrightarrow \ 0 \in \mathrm{med}_\ell (u). \end{aligned}$$

Proposition 3.17

If \(\left( \lambda ^\Gamma _\Omega , \frac{1}{\ell (\Omega )}{\chi }_\Omega \right) \) is a \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\), then \(\Omega \) is \(\Gamma \)-calibrable.

Proof

By Remark 3.14 there exist \(\xi \in \mathrm{sign}({\chi }_\Omega )\) and \(\textbf{z}\in X_K(\Gamma )\), \(\Vert \textbf{z}\Vert _{L^\infty (\Gamma )} \le 1\), satisfying

$$\begin{aligned} \lambda ^\Gamma _\Omega \xi = -\textbf{z}^{\prime },\quad \mathcal {F}_\Gamma \left( \frac{1}{\ell (\Omega )}{\chi }_\Omega \right) = \lambda ^\Gamma _\Omega . \end{aligned}$$

Then, since \(\xi =1\) in \(\Omega \) and \(\xi \) satisfies

$$\begin{aligned} \lambda ^\Gamma _\Omega \xi = -\textbf{z}^{\prime },\quad \mathcal {F}_\Gamma ({\chi }_\Omega ) = \mathrm{Per}_\Gamma (\Omega ), \end{aligned}$$

we have

$$\begin{aligned} - \lambda ^\Gamma _\Omega \xi \in \Delta _1^\Gamma {\chi }_\Omega \, \quad \hbox {in} \ \Gamma . \end{aligned}$$

Then, by Theorem 3.10 and having in mind Remark 3.11, we get that \(\Omega \) is \(\Gamma \)-calibrable.

\(\square \)

In the next example we see that the reverse implication in the above proposition is false in general.

Example 3.18

Consider the metric graph \(\Gamma \) with two vertices and one edge, that is \(V(\Gamma ) = \{\mathrm {v}_1, \mathrm {v}_2 \}\) and \(E(\Gamma ) = \{ \textbf{e}:=[\mathrm {v}_1, \mathrm {v}_2] \}\), with \(\ell _{\textbf{e}} =6\). Let \(\Omega := [c_\textbf{e}^{-1}(1), c_\textbf{e}^{-1}(5)]\).


Obviously, \(\Omega \) is \(\Gamma \)-calibrable. Now, since \(\ell (\Omega ) = 4 > \frac{1}{2} \ell (\Gamma )\), we have \(0 \not \in \mathrm{med}_\ell ({\chi }_\Omega )\), and therefore, by Corollary 3.16, \(\left( \lambda ^\Gamma _\Omega , \frac{1}{\ell (\Omega )}{\chi }_\Omega \right) \) is not a \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\). \(\blacksquare \)

4 The Cheeger cut in metric graphs

We define the \(\Gamma \)-Cheeger constant of \(\Gamma \) as

$$\begin{aligned} h(\Gamma ):= \inf \left\{ \frac{\mathrm{Per}_\Gamma (D)}{\min \{ \ell (D), \ell (\Gamma \setminus D)\}} \ : \ D \subset \Gamma , \ 0< \ell (D) < \ell (\Gamma ) \right\} \end{aligned}$$

or, equivalently,

$$\begin{aligned} h(\Gamma )= \inf \left\{ \frac{\mathrm{Per}_\Gamma (D)}{\ell (D)} \ : \ D \subset \Gamma , \ 0 < \ell (D) \le \frac{1}{2} \ell (\Gamma )\right\} . \end{aligned}$$
(4.1)

A partition \((D, \Gamma \setminus D)\) of \(\Gamma \) is called a Cheeger cut of \(\Gamma \) if D is a minimizer of problem (4.1), i.e., if \(0 < \ell (D) \le \frac{1}{2} \ell (\Gamma )\) and \(h(\Gamma ) = \frac{\mathrm{Per}_\Gamma (D)}{\ell (D)}\).

Note that if \(D \subset \Gamma \), \(0 < \ell (D) \le \frac{1}{2} \ell (\Gamma )\), we have

$$\begin{aligned} \frac{\mathrm{Per}_\Gamma (D)}{\ell (D)} \ge \frac{1}{\frac{1}{2} \ell (\Gamma )} = \frac{2}{\ell (\Gamma )}, \end{aligned}$$

and therefore

$$\begin{aligned} h(\Gamma ) \ge \frac{2}{\ell (\Gamma )}. \end{aligned}$$
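For instance, if \(\Gamma \) consists of a single edge of length \(\ell (\Gamma )\), this lower bound is attained: cutting the edge at its midpoint produces a set D whose boundary is a single point in the interior of the edge, so that (as in the examples below) \(\mathrm{Per}_\Gamma (D)=1\) and \(\ell (D) = \frac{1}{2}\ell (\Gamma )\), whence \(h(\Gamma ) = \frac{2}{\ell (\Gamma )}\).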

We will now give a variational characterization of the Cheeger constant which for finite weighted graphs was obtained in [46] (see also [38]). For compact Riemannian manifolds the first result of this type was obtained by Yau in [47].

Theorem 4.1

We have

$$\begin{aligned} h(\Gamma ) =\lambda _1(\Gamma ) := \inf \left\{ TV_\Gamma (u) \ : \ u \in \Pi (\Gamma ) \right\} , \end{aligned}$$
(4.2)

where

$$\begin{aligned} \Pi (\Gamma ):= \left\{ u \in BV(\Gamma ) \ : \ \Vert u \Vert _1 = 1, \ 0 \in \mathrm{med}_\ell (u) \right\} . \end{aligned}$$

Moreover, there exist a minimizer u of problem (4.2) and \(t \ge 0\) such that \(E_t(u)\) is a Cheeger cut of \(\Gamma \).

Proof

If \(D \subset \Gamma , \ 0 < \ell (D) \le \frac{1}{2} \ell (\Gamma )\), then \(0 \in \mathrm{med}_\ell ({\chi }_D)\). Thus,

$$\begin{aligned} \lambda _1(\Gamma ) \le TV_\Gamma \left( \frac{1}{\ell (D)} {\chi }_D \right) = \frac{1}{\ell (D)} \mathrm{Per}_\Gamma (D) \end{aligned}$$

and, therefore,

$$\begin{aligned} \lambda _1(\Gamma ) \le h(\Gamma ). \end{aligned}$$

On the other hand, by the Embedding Theorem (Theorem 2.6) and the lower semi-continuity of the total variation (Corollary 2.12), applying the Direct Method of the Calculus of Variations, we obtain a function \(u \in L^1(\Gamma )\) with \(\Vert u \Vert _1 = 1\) and \(0 \in \mathrm{med}_\ell (u)\) such that \(TV_\Gamma (u) = \lambda _1(\Gamma )\). Now, since \(0 \in \mathrm{med}_\ell (u)\), we have \(\ell (E_t(u)) \le \frac{1}{2} \ell (\Gamma )\) for \(t \ge 0\) and \(\ell (\Gamma \setminus E_t(u)) \le \frac{1}{2} \ell (\Gamma )\) for \(t \le 0\). Then, by the Coarea formula (Theorem 2.13), Cavalieri's principle and the fact that the set \(\{ t \in \mathbb R\ : \ \ell ( \{ u = t \}) > 0 \}\) is countable, we have

$$\begin{aligned} 0\le & {} \int _0^\infty \left( \mathrm{Per}_\Gamma (E_t(u)) - h(\Gamma ) \ell (E_t(u))\right) dt + \int _{-\infty }^{0} \left( \mathrm{Per}_\Gamma (X \setminus E_t(u)) \right. \\&\quad \left. -\, h(\Gamma ) \ell (X \setminus E_t(u)) \right) \, dt \\= & {} \int _{-\infty }^{+\infty } \mathrm{Per}_\Gamma (E_t(u))\, dt - h(\Gamma ) \left( \int _0^\infty \ell (E_t(u)) dt + \int _{-\infty }^{0} \ell (X \setminus E_t(u)) dt \right) \\= & {} TV_\Gamma (u) - h(\Gamma ) \left( \int _\Gamma u^+(x) dx + \int _\Gamma u^-(x) dx \right) = TV_\Gamma (u) - h(\Gamma )\Vert u \Vert _1 \\= & {} TV_\Gamma (u) - h(\Gamma ) \le 0. \end{aligned}$$

It follows that for almost every \(t \ge 0\) (in the sense of the Lebesgue measure on \(\mathbb R\)),

$$\begin{aligned} \mathrm{Per}_\Gamma (E_t(u)) - h(\Gamma ) \ell (E_t(u)) = 0. \end{aligned}$$
(4.3)

Since \(u\not \equiv 0\), there must exist \(t \ge 0\) such that \(\ell (E_t(u)) >0\) and for which (4.3) holds; for such a t, \(E_t(u)\) is a Cheeger cut of \(\Gamma \). Moreover, since \(TV_\Gamma (u) = \lambda _1(\Gamma )\), the above chain of inequalities also gives

$$\begin{aligned} h(\Gamma ) \le \lambda _1(\Gamma ), \end{aligned}$$

which, together with the reverse inequality obtained above, finishes the proof. \(\square \)

Corollary 4.2

We have

$$\begin{aligned} h(\Gamma )= & {} \min \left\{ \frac{TV_\Gamma (u)}{ \Vert u - \mu \Vert _1} \ : \ u \in L^1(\Gamma ), \ \mu \in \mathrm{med}_\ell (u) \right\} \\= & {} \inf \left\{ \sup _{c \in \mathbb R} \frac{TV_\Gamma (u)}{\Vert u - c \Vert _1} \ : \ u \in L^1(\Gamma ) \right\} . \end{aligned}$$

Proof

A simple calculation shows that

$$\begin{aligned} \lambda _1(\Gamma ) = \min \left\{ \frac{TV_\Gamma (u)}{ \Vert u - \mu \Vert _1} \ : \ u \in L^1(\Gamma ), \ \mu \in \mathrm{med}_\ell (u) \right\} . \end{aligned}$$

Let

$$\begin{aligned} \alpha := \inf \left\{ \sup _{c \in \mathbb R} \frac{TV_\Gamma (u)}{\Vert u - c \Vert _1} \ : \ u \in L^1(\Gamma ) \right\} . \end{aligned}$$

Given \(u \in L^1(\Gamma )\) and \(\mu \in \mathrm{med}_\ell (u)\), we have

$$\begin{aligned} \frac{TV_\Gamma (u)}{ \Vert u - \mu \Vert _1} \le \sup _{c \in \mathbb R} \frac{TV_\Gamma (u)}{\Vert u - c \Vert _1}, \end{aligned}$$

hence

$$\begin{aligned} h(\Gamma ) = \lambda _1(\Gamma ) \le \alpha . \end{aligned}$$

To prove the other inequality, let \(D \subset \Gamma \), with \(\ell (D) \le \ell (\Gamma \setminus D)\), be such that

$$\begin{aligned} h(\Gamma ) = \frac{\mathrm{Per}_\Gamma (D)}{\ell (D)}. \end{aligned}$$

and take \(v:= {\chi }_D - {\chi }_{\Gamma \setminus D}\). Then \(TV_\Gamma (v) = 2\, \mathrm{Per}_\Gamma (D)\) and, for \(\vert c \vert \le 1\), \(\Vert v - c \Vert _1 = (1 - c)\ell (D)+ (1 + c) \ell (\Gamma \setminus D) \ge 2 \ell (D)\), with equality at \(c=-1\) because \(\ell (D) \le \ell (\Gamma \setminus D)\); moreover, \(\Vert v - c \Vert _1\) only increases for \(\vert c \vert > 1\). Then,

$$\begin{aligned} \alpha= & {} \inf \left\{ \sup _{c \in \mathbb R} \frac{TV_\Gamma (u)}{\Vert u - c \Vert _1} \ : \ u \in L^1(\Gamma ) \right\} \le \sup _{c \in \mathbb R} \frac{TV_\Gamma (v)}{\Vert v - c \Vert _1} = \max _{\vert c \vert \le 1} \frac{2\mathrm{Per}_\Gamma (D)}{(1 - c)\ell (D)+ (1 + c) \ell (\Gamma \setminus D)} \\= & {} \frac{2\mathrm{Per}_\Gamma (D)}{2 \ell (D)} = \frac{\mathrm{Per}_\Gamma (D)}{\ell (D)} = h(\Gamma ). \end{aligned}$$

\(\square \)

For finite weighted graphs, it is well known that the first non-zero eigenvalue coincides with the Cheeger constant (see [13]). This result is not true for infinite weighted graphs (see [38]). In the next result we will see that it does hold for metric graphs.

Theorem 4.3

We have

$$\begin{aligned} h(\Gamma ) =\inf \{\lambda \not = 0 \ : \ \lambda \ \hbox {is a } \Gamma \hbox {-eigenvalue of } -\Delta ^\Gamma _1 \}. \end{aligned}$$

Moreover, \(h(\Gamma )\) is the first non-zero \(\Gamma \)-eigenvalue of \(- \Delta ^\Gamma _1\), and if u is a minimizer of problem (4.2), then there exists \(t \ge 0\) such that \(E_t(u)\) is a Cheeger cut of \(\Gamma \) and

$$\begin{aligned} \left( h(\Gamma ), \frac{1}{\ell (E_t(u))} {\chi }_{E_t(u)} \right) \end{aligned}$$

is a \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\).

Proof

By Corollary 3.16, if \((\lambda , u)\) is a \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\) and \(\lambda \not = 0\), then \(u \in \Pi (\Gamma )\). Now, \(TV_\Gamma (u)=\lambda \); thus, as a consequence of Theorem 4.1, we have

$$\begin{aligned} h(\Gamma ) \le \lambda . \end{aligned}$$

On the other hand, by Theorem 4.1, there exists \(t \ge 0\) such that \(E_t(u)\) is a Cheeger cut of \(\Gamma \). Then \(E_t(u)\) is \(\Gamma \)-calibrable: indeed, any \(E \subset E_t(u)\) with \(\ell (E)>0\) satisfies \(\ell (E) \le \frac{1}{2} \ell (\Gamma )\), and hence \(\frac{\mathrm{Per}_\Gamma (E)}{\ell (E)} \ge h(\Gamma ) = \frac{\mathrm{Per}_\Gamma (E_t(u))}{\ell (E_t(u))}\). Hence, by Theorem 3.6,

$$\begin{aligned} h(\Gamma ) = h_1^\Gamma (E_t(u)) = \sup \left\{ \frac{1}{\Vert \textbf{z}\Vert _\infty } \ : \ \textbf{z}\in X_K(\Gamma ), \ \textbf{z}' = {\chi }_{E_t(u)} \right\} . \end{aligned}$$

Then, there exists a sequence \(\textbf{z}_n \in X_K(\Gamma )\) with \(\textbf{z}_n' = {\chi }_{E_t(u)}\) for all \(n \in \mathbb N\), such that

$$\begin{aligned} h(\Gamma ) = \lim _{n \rightarrow \infty } \frac{1}{\Vert \textbf{z}_n \Vert _\infty }. \end{aligned}$$

Now, since \(h(\Gamma ) >0\), we have \(\{ \Vert \textbf{z}_n \Vert _\infty \ : \ n \in \mathbb N\}\) is bounded. Thus, we can assume, taking a subsequence if necessary, that

$$\begin{aligned} \textbf{z}_n \rightarrow \textbf{z}, \quad \hbox {weakly}^* \hbox {in} \ L^\infty (\Gamma ), \quad \hbox {and} \quad \textbf{z}' = {\chi }_{E_t(u)}. \end{aligned}$$

Let us now see that \(\textbf{z}\in X_K(\Gamma )\). By Proposition 2.2, we have that

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _\Gamma \textbf{z}_n Du = \int _\Gamma \textbf{z}Du, \quad \forall \, u \in BV(\Gamma ). \end{aligned}$$

Fix \(\mathrm {v}\in V(\Gamma )\). Applying Green’s formula (2.4) to \(\textbf{z}_n\) and \(u \in BV(\Gamma )\), we get

$$\begin{aligned} \int _\Gamma \textbf{z}_n Du + \int _\Gamma u \textbf{z}_n^{\prime } = \sum _{\mathrm {v}\in V(\Gamma )} \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}_n]_\textbf{e}(\mathrm {v}) [u]_\textbf{e}(\mathrm {v}). \end{aligned}$$

Hence, taking u such that \([u]_\textbf{e}(\mathrm {v}) =1\) for all \(\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )\) and \([u]_{\hat{\textbf{e}}}(\mathrm {w}) =0\) for every vertex \(\mathrm {w}\ne \mathrm {v}\) and every \(\hat{\textbf{e}}\in \mathrm {E}_\mathrm {w}(\Gamma )\), we have

$$\begin{aligned} \int \textbf{z}_n Du + \int _\Gamma u \textbf{z}_n^{\prime } = \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}_n]_\textbf{e}(\mathrm {v}) [u]_\textbf{e}(\mathrm {v}) =0. \end{aligned}$$

Then, taking the limit as \(n \rightarrow \infty \) and having in mind (2.2), we obtain

$$\begin{aligned} 0 = \int \textbf{z}Du + \int _\Gamma u \textbf{z}^{\prime } = \sum _{\textbf{e}\in \mathrm {E}_\mathrm {v}(\Gamma )} [\textbf{z}]_\textbf{e}(\mathrm {v}) [u]_\textbf{e}(\mathrm {v}). \end{aligned}$$

Therefore, \(\textbf{z}\in X_K(\Gamma )\).

If we take \(\tilde{\textbf{z}}:= - h(\Gamma ) \textbf{z}\), and \(v:= \frac{1}{\ell (E_t(u))} {\chi }_{E_t(u)}\), we have \(\tilde{\textbf{z}} \in X_K(\Gamma )\), \(\Vert \tilde{\textbf{z}} \Vert _\infty \le 1\) and

$$\begin{aligned} - \tilde{\textbf{z}}' = h(\Gamma ){\chi }_{E_t(u)}, \quad TV_\Gamma (v) = h(\Gamma ). \end{aligned}$$

Therefore,

$$\begin{aligned} \left( h(\Gamma ), \frac{1}{\ell (E_t(u))} {\chi }_{E_t(u)} \right) \end{aligned}$$

is a \(\Gamma \)-eigenpair of \(-\Delta ^\Gamma _1\). \(\square \)

Remark 4.4

In [19, Theorem 1.3] it was proved that if we define

$$\begin{aligned} \Lambda _{2,p}(\Gamma ):= \inf \left\{ \frac{\int _\Gamma \vert u'(x) \vert ^p dx}{\int _\Gamma \vert u(x) \vert ^p dx}: u \in W^{1,p}(\Gamma ), \ \int _\Gamma \vert u(x) \vert ^{p-2}u(x) dx =0, u \not \equiv 0 \right\} , \end{aligned}$$
(4.4)

then if \(u_p\) is a minimizer of (4.4), there exists a subsequence \(p_j \rightarrow 1^+\), and \(u \in BV(\Gamma )\), such that

$$\begin{aligned} u_{p_j} \rightarrow u\quad \hbox {in} \ L^1(\Gamma ), \end{aligned}$$

where u is a minimizer of (4.2). Moreover,

$$\begin{aligned} \lim _{p \rightarrow 1^+} \Lambda _{2,p}(\Gamma ) = \Lambda _{2,1}(\Gamma ), \end{aligned}$$

where

$$\begin{aligned} \Lambda _{2,1}(\Gamma ) := \inf \left\{ \vert Du \vert (\Gamma ) \ : \ u \in \Pi (\Gamma ) \right\} . \end{aligned}$$

Let us point out that, since for \(u \in BV(\Gamma )\), in general, \(\vert Du \vert (\Gamma ) < TV_\Gamma (u)\), we have \(\Lambda _{2,1}(\Gamma ) < \lambda _1(\Gamma )\). Moreover, with this definition of \(\Lambda _{2,1}(\Gamma )\) it is even possible that \(\Lambda _{2,1}(\Gamma ) =0\): for instance, if \(V(\Gamma ) = \{\mathrm {v}_1, \mathrm {v}_2, \mathrm {v}_3 \}\) and \(E(\Gamma ) = \{ \textbf{e}_1:=[\mathrm {v}_1, \mathrm {v}_2], \textbf{e}_2:=[\mathrm {v}_2, \mathrm {v}_3] \}\), with \(\ell _{\textbf{e}_1} =\ell _{\textbf{e}_2}\), and \(u_i\) is defined by \([u_i]_{\textbf{e}_i} = {\chi }_{(0, \ell _{\textbf{e}_i})}\) and \([u_i]_{\textbf{e}_j} = 0\) for \(j \ne i\), then (after normalizing in \(L^1(\Gamma )\)) \(u_i \in \Pi (\Gamma )\) and \( \vert Du_i \vert (\Gamma ) =0\). \(\blacksquare \)

Let \(A \subset \Gamma \) with \( \mathrm{Per}_\Gamma (A) >0\), \(\ell (A) = \frac{1}{2} \ell (\Gamma )\), and \(u= \frac{1}{\ell (\Gamma )} \left( {\chi }_A - {\chi }_{\Gamma \setminus A} \right) \). It is easy to see that \(TV_\Gamma (u)=\frac{2}{\ell (\Gamma )}\mathrm{Per}_\Gamma (A) = \frac{\mathrm{Per}_\Gamma (A)}{\ell (A)} >0\). Hence, since \(\Vert u\Vert _1=1\) and \(0 \in \mathrm{med}_\ell (u)\), we obtain the following result as a consequence of Theorem 4.1.

Corollary 4.5

Let \(A \subset \Gamma \) with \(\ell (A) = \frac{1}{2} \ell (\Gamma )\) and \(u= \frac{1}{\ell (\Gamma )} \left( {\chi }_A - {\chi }_{\Gamma \setminus A} \right) \). Then,

\(h(\Gamma ) = \frac{\mathrm{Per}_\Gamma (A)}{\ell (A)} \iff u= \frac{1}{\ell (\Gamma )} \left( {\chi }_A - {\chi }_{\Gamma \setminus A} \right) \ \hbox { is a minimizer of }\) (4.2).

A similar result was proved in [19, Theorem 1.4], but we have observed in Remark 4.4 that their concept of perimeter is different from the one we use here.

In the next example we will see that there are Cheeger cuts E such that \(\ell (E) < \frac{1}{2}\ell (\Gamma )\).

Example 4.6

Consider the metric graph \(\Gamma \) with four vertices and three edges, \(V(\Gamma ) = \{\mathrm {v}_1, \mathrm {v}_2, \mathrm {v}_3, \mathrm {v}_4 \}\) and \(E(\Gamma ) = \{ \textbf{e}_1:=[\mathrm {v}_1, \mathrm {v}_2], \textbf{e}_2:=[\mathrm {v}_2, \mathrm {v}_3], \textbf{e}_3:=[\mathrm {v}_2, \mathrm {v}_4] \}\).


If we assume that \(\ell _{\textbf{e}_i} = L\) for \(i=1,2,3\), then each \(\textbf{e}_i\) is a Cheeger cut of \(\Gamma \). In fact, if \(D \subset \Gamma \) has \(\mathrm{Per}_\Gamma (D) =1\), then \(D \subset \textbf{e}_i\) for some i. Now, if \(D \not = \textbf{e}_i\), then

$$\begin{aligned} \frac{\mathrm{Per}_\Gamma (D)}{\ell (D)} > \frac{\mathrm{Per}_\Gamma (\textbf{e}_i)}{\ell (\textbf{e}_i)} = \frac{1}{L}. \end{aligned}$$

Moreover, if \(D \subset \Gamma \) and \(L < \ell (D) \le \frac{3L}{2}\), then \(\mathrm{Per}_\Gamma (D) \ge 2\). Hence

$$\begin{aligned} \frac{\mathrm{Per}_\Gamma (D)}{\ell (D)} \ge \frac{2}{\frac{3L}{2}} = \frac{4}{3L} >\frac{1}{L}. \end{aligned}$$

Thus

$$\begin{aligned} h(\Gamma ) = \frac{\mathrm{Per}_\Gamma (\textbf{e}_i)}{\ell (\textbf{e}_i)} = \frac{1}{L}, \end{aligned}$$

and consequently, each \(\textbf{e}_i\) is a Cheeger cut of \(\Gamma \).

Moreover, \((h(\Gamma ), \frac{1}{L} {\chi }_{\textbf{e}_i})\) is a \(\Gamma \)-eigenpair of \(- \Delta ^\Gamma _1\). For instance, for \(\textbf{e}_1\), if we define \(\textbf{z}\) as

$$\begin{aligned}{}[\textbf{z}]_{\textbf{e}_1}(x):= - \frac{1}{L}x, \quad [\textbf{z}]_{\textbf{e}_i}(x):= \frac{2}{L}x -2, \ i =2,3, \ x \in (0,L), \end{aligned}$$

then,

$$\begin{aligned} - \textbf{z}' = h(\Gamma ) {\chi }_{\textbf{e}_1}\quad \hbox {in} \ \mathcal {D}(\Gamma ), \quad \hbox {and} \quad TV_\Gamma \left( \frac{1}{L} {\chi }_{\textbf{e}_1}\right) = h(\Gamma ). \end{aligned}$$

Therefore \((h(\Gamma ), \frac{1}{L} {\chi }_{\textbf{e}_1})\) is a \(\Gamma \)-eigenpair of \(- \Delta ^\Gamma _1\), and by symmetry the same holds for \(\textbf{e}_2\) and \(\textbf{e}_3\).

If we assume now that \(\ell _{\textbf{e}_1} > \ell _{\textbf{e}_2} + \ell _{\textbf{e}_3}\), then it is easy to see that

$$\begin{aligned} h(\Gamma ) = \frac{2}{\ell _{\textbf{e}_1} + \ell _{\textbf{e}_2} +\ell _{\textbf{e}_3}}, \quad \hbox {and} \quad \left( \mathrm {v}_1, c_{\textbf{e}_1}^{-1}\left( \frac{\ell _{\textbf{e}_1} + \ell _{\textbf{e}_2} +\ell _{\textbf{e}_3}}{2}\right) \right) \ \hbox {is a Cheeger cut of} \ \Gamma . \end{aligned}$$

\(\blacksquare \)
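The following small numerical companion to this example (ours, not part of the paper; the function name and the sampling are illustrative choices) restricts the minimization in (4.1) to partitions obtained by cutting the star at a single point of one edge. For such a cut, as in the computations above, the perimeter of each piece equals 1 and D is the segment on the leaf side of the cut; already this restricted search reproduces the two values of \(h(\Gamma )\) found in the example.

```python
# Brute-force search over single-point cuts of a 3-star (illustrative sketch only).
import numpy as np

def best_single_cut(edge_lengths, samples=10_000):
    """Smallest Cheeger quotient over cuts at one point of one edge of a star."""
    total = sum(edge_lengths)
    best = np.inf
    for L_e in edge_lengths:
        for s in np.linspace(1e-6, L_e, samples):       # cut at distance s from the leaf
            best = min(best, 1.0 / min(s, total - s))   # Per_Gamma = 1 for such a cut
    return best

print(best_single_cut([1.0, 1.0, 1.0]))   # ~ 1.0 = 1/L          (equilateral case)
print(best_single_cut([4.0, 1.0, 1.0]))   # ~ 1/3 = 2/(l_1+l_2+l_3)
```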

We now obtain a Cheeger inequality of type (1.1) for metric graphs. To this end, let us introduce the Laplace operator \(\Delta _\Gamma \) on the metric graph \(\Gamma \). This is a standard construction and we refer the interested reader to [7, 11]. For a function \(u \in W^{1,1}(\Gamma )\), if \(\textbf{e}\in E(\Gamma )\) and \(\mathrm {v}\in V(\Gamma )\), we define the normal derivative of u at \(\mathrm {v}\) as

$$\begin{aligned} \frac{\partial [u]_\textbf{e}}{\partial n_\textbf{e}}(\mathrm {v}) := \left\{ \begin{array}{ll} - [u]_\textbf{e}'(0+), \quad &{}\hbox {if} \ \ \mathrm {v}= \mathrm {i}_\textbf{e}\\ {[}u]_\textbf{e}'(\ell _\textbf{e}-), \quad &{}\hbox {if} \ \ \mathrm {v}= \mathrm {f}_\textbf{e}. \end{array} \right. \end{aligned}$$

The Sobolev space \(W^{2,2}(\Gamma )\) is defined as the space of functions u on \(\Gamma \) such that \([u]_{\textbf{e}}\in W^{2,2}(0,\ell _{\textbf{e}})\) for all \(\textbf{e}\in \mathrm {E}(\Gamma )\). The operator \(\Delta _\Gamma \) has domain

$$\begin{aligned} D(\Delta _\Gamma ):= \left\{ u \in W^{2,2}(\Gamma ) : u \ \hbox {continuous and } \ \sum _{\textbf{e}\in E_\mathrm {v}(\Gamma )} \frac{\partial [u]_\textbf{e}}{\partial n_\textbf{e}}(\mathrm {v}) =0 \ \ \hbox {for all } \ \mathrm {v}\in V(\Gamma ) \right\} \end{aligned}$$

and it acts on any function \(u \in D(\Delta _\Gamma )\) as follows:

$$\begin{aligned}{}[\Delta _\Gamma u]_\textbf{e}:= ([u]_\textbf{e})_{xx}, \quad \hbox {for all} \ \ \textbf{e}\in E(\Gamma ). \end{aligned}$$

The energy functional associated with \(\Delta _\Gamma \) is given by

$$\begin{aligned} \mathcal {H}_\Gamma (u):= \int _\Gamma (u'(x))^2 dx = \sum _{\textbf{e}\in E(\Gamma )} \Vert [u]_\textbf{e}' \Vert ^2_{L^2(0, \ell _\textbf{e})}. \end{aligned}$$

We have

$$\begin{aligned} \mathcal {H}_\Gamma (u) = - \int _\Gamma u(x)\Delta _\Gamma u (x) dx, \quad \hbox {for} \ u \in D(\Delta _\Gamma ). \end{aligned}$$

The operator \(- \Delta _\Gamma \) is selfadjoint in \(L^2(\Gamma )\) and

$$\begin{aligned} \sigma ( -\Delta _\Gamma ) \setminus \{0 \} = \{\mu _1(\Gamma ), \mu _2(\Gamma ), \ldots \}, \end{aligned}$$

where

$$\begin{aligned} \mu _1(\Gamma ) = \mathrm{gap}(- \Delta _\Gamma )= \min \left\{ \frac{\mathcal {H}_\Gamma (u)}{\Vert u \Vert ^2_{L^2(\Gamma )}} \ : \ u \in D(\mathcal {H}_\Gamma ), \Vert u \Vert _{L^2(\Gamma )} \not =0, \ \int _\Gamma u(x) dx =0 \right\} . \end{aligned}$$

Theorem 4.7

We have the following Cheeger Inequality:

$$\begin{aligned} \frac{1}{4} h(\Gamma )^2 \le \mathrm{gap}(- \Delta _\Gamma ). \end{aligned}$$
(4.5)

Proof

Let \(u \in D(\mathcal {H}_\Gamma )\), with \(\int _\Gamma u(x) dx =0\), such that

$$\begin{aligned} \frac{\mathcal {H}_\Gamma (u)}{\Vert u \Vert ^2_{L^2(\Gamma )}} = \mathrm{gap}(- \Delta _\Gamma ). \end{aligned}$$

If \(\alpha \in \mathrm{med}_{\ell }(u)\) and \(v:= u - \alpha \), then \(0 \in \mathrm{med}_{\ell }(v \vert v \vert )\), since \(\{ v \vert v \vert> 0 \} = \{ u > \alpha \}\) and \(\{ v \vert v \vert < 0 \} = \{ u < \alpha \}\). Then, applying Theorem 4.1 to \(v \vert v \vert \) (normalized in \(L^1(\Gamma )\)), we have

$$\begin{aligned} h(\Gamma ) \le \frac{TV_\Gamma (v \vert v \vert ) }{\displaystyle \int _\Gamma v^2(x) dx}. \end{aligned}$$

Now, since \(\int _\Gamma u(x) dx =0\), expanding the square gives \(\int _\Gamma v^2(x) dx = \int _\Gamma u^2(x) dx + \alpha ^2 \ell (\Gamma )\), and hence

$$\begin{aligned} \int _\Gamma v^2(x) dx \ge \int _\Gamma u^2(x) dx. \end{aligned}$$

On the other hand, since \(v \vert v \vert \) is continuous on \(\Gamma \) and belongs to \(W^{1,1}(\Gamma )\), by the Cauchy–Schwarz inequality,

$$\begin{aligned} TV_\Gamma (v \vert v \vert ) = \int _\Gamma \vert (v \vert v \vert )'(x)\vert \, dx = 2 \int _\Gamma \vert v(x)\vert \, \vert v'(x)\vert \, dx \le 2 \left( \int _\Gamma v^2(x)dx \right) ^{\frac{1}{2}} \left( \int _\Gamma (v')^2(x)dx\right) ^{\frac{1}{2}}. \end{aligned}$$

Thus

$$\begin{aligned} h(\Gamma )^2\le & {} \frac{4 \left( \displaystyle \int _\Gamma v^2(x)dx \right) \left( \displaystyle \int _\Gamma (v')^2(x)dx\right) }{\left( \displaystyle \int _\Gamma v^2(x)dx \right) ^2} \\= & {} \frac{4 \displaystyle \int _\Gamma (u')^2(x)dx}{\displaystyle \int _\Gamma v^2(x)dx } \le \frac{4\mathcal {H}_\Gamma (u)}{\Vert u \Vert ^2_{L^2(\Gamma )}} = 4 \mathrm{gap}(- \Delta _\Gamma ), \end{aligned}$$

and therefore (4.5) holds. \(\square \)
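As a numerical sanity check of (4.5) (ours, not part of the paper's arguments): for a single edge of length \(\ell \), \(-\Delta _\Gamma \) is the Neumann Laplacian on \((0,\ell )\), so \(\mathrm{gap}(-\Delta _\Gamma ) = \pi ^2/\ell ^2\) while \(h(\Gamma ) = 2/\ell \), and (4.5) reads \(1/\ell ^2 \le \pi ^2/\ell ^2\). The sketch below does the same check for the equilateral 3-star of Example 4.6, where \(h(\Gamma ) = 1/L\): it assembles a lumped \(P_1\) finite-element approximation of \(-\Delta _\Gamma \) (the Kirchhoff condition is encoded by sharing the central node) and computes the spectral gap; the discretization and all names are our own illustrative choices.

```python
# Lumped P1 finite elements for -Delta_Gamma on the equilateral 3-star
# (three edges of length L glued at one vertex); illustrative sketch only.
import numpy as np
from scipy.linalg import eigh

def star_gap(L=1.0, n=300):
    """Approximate gap(-Delta_Gamma) for a 3-star with edges of length L."""
    h = L / n                         # mesh size on each edge
    N = 1 + 3 * n                     # node 0 is the central vertex
    K = np.zeros((N, N))              # stiffness matrix: integrals of u' w'
    M = np.zeros(N)                   # lumped mass matrix: integrals of u w
    for e in range(3):
        nodes = [0] + [1 + e * n + j for j in range(n)]   # center, ..., leaf
        for a, b in zip(nodes[:-1], nodes[1:]):
            K[a, a] += 1.0 / h; K[b, b] += 1.0 / h
            K[a, b] -= 1.0 / h; K[b, a] -= 1.0 / h
            M[a] += h / 2.0;    M[b] += h / 2.0
    # generalized eigenproblem K u = mu M u; mu_0 = 0 (constants), mu_1 = gap
    evals = eigh(K, np.diag(M), eigvals_only=True)
    return evals[1]

L = 1.0
gap = star_gap(L)
h_gamma = 1.0 / L                     # Cheeger constant of the star, Example 4.6
print(f"gap ~ {gap:.4f}  (exact value pi^2/(4 L^2) = {np.pi**2 / (4 * L**2):.4f})")
print(f"h(Gamma)^2 / 4 = {h_gamma**2 / 4:.4f}  <=  gap: {h_gamma**2 / 4 <= gap}")
```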

Let us point out that the Cheeger Inequality (4.5) was also proved by Nicaise [41] (see also [30, 43]), but with a different proof and for a different concept of perimeter.