1 Introduction

1.1 Historical Note

Cheeger constants and Cheeger inequalities have a long history. What is now called the Cheeger constant of a simple graph \(G=(V,E)\) was introduced in 1951 by Pólya and Szegő [23], who called it the isoperimetric constant and defined it as

$$\begin{aligned} h(G):=\min _{\emptyset \ne S\subsetneq V}\frac{|E(S,\bar{S})|}{\min \{{{\,\mathrm{vol}\,}}(S),{{\,\mathrm{vol}\,}}(\bar{S})\}}, \end{aligned}$$

where \(E(S,\bar{S})\) denotes the set of edges between S and its complement \(\bar{S}:=V\setminus S\), while the volume of S, denoted \({{\,\mathrm{vol}\,}}(S)\), is the sum of the vertex degrees in S. Finding a set S realizing the Cheeger constant means finding a small edge cut \(E(S,\bar{S})\) whose removal divides the graph into two disconnected components of roughly equal volume (Fig. 1). Therefore, h measures how far G is from being disconnected, and it is largest for the complete graph.
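For small graphs, h(G) can be computed by exhaustive search directly from this definition. The following Python sketch (the function name and the path-graph test case are illustrative choices, not from the text) does exactly that.

```python
from itertools import combinations

# Brute-force evaluation of the graph Cheeger constant h(G) straight
# from the definition above.  Exponential in |V|, so only for tiny
# graphs; the path graph P_4 is an illustrative choice.
def cheeger(edges, V):
    deg = {v: sum(v in e for e in edges) for v in V}
    vol = lambda S: sum(deg[v] for v in S)
    best = float("inf")
    for r in range(1, len(V)):
        for S in combinations(V, r):
            S = set(S)
            cut = sum((u in S) != (w in S) for u, w in edges)
            best = min(best, cut / min(vol(S), vol(set(V) - S)))
    return best

print(cheeger([(0, 1), (1, 2), (2, 3)], [0, 1, 2, 3]))
# cut between {0,1} and {2,3}: one crossing edge, both volumes 3, so h = 1/3
```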

Fig. 1
figure 1

The Cheeger cut on a graph

The continuous analogue of h(G) was then defined by Cheeger [10] in 1970, in the context of spectral geometry, as follows. Given a compact n-dimensional manifold M, let

$$\begin{aligned} h(M):=\inf _D\frac{{{\,\mathrm{vol}\,}}_{n-1}(\delta D)}{{{\,\mathrm{vol}\,}}_n(D)}, \end{aligned}$$

where \(D\subset M\) is a smooth n-submanifold with boundary \(\delta D\) and \(0<{{\,\mathrm{vol}\,}}_n(D)\le {{\,\mathrm{vol}\,}}_n(M)/2\). Cheeger proved that the first nonvanishing eigenvalue \(\lambda _{\min }(M)\) of the Laplace-Beltrami operator is such that

$$\begin{aligned} \lambda _{\min }(M)\ge \frac{1}{4}h^2(M) \end{aligned}$$

and, as shown by Buser [5] in 1978, for each compact manifold there exist Riemannian metrics for which the inequality becomes sharp. In a later work in 1982, Buser [6] also proved that, if the Ricci curvature of a compact unbordered Riemannian n-manifold M is bounded below by \(-(n-1)a^2\), for some \(a\ge 0\), then

$$\begin{aligned} \lambda _{\min }(M)\le 2a(n-1)h+10h^2. \end{aligned}$$

Therefore, h(M) can be used to estimate \(\lambda _{\min }(M)\) and vice versa.

In 1984–1985, Dodziuk [12] and Alon and Milman [1] derived analogous estimates for the graph Cheeger constant and for the first nonvanishing eigenvalue of the Kirchhoff Laplacian associated to a connected graph. Similarly, in 1992, Chung [11] proved the Cheeger inequalities for the symmetric normalized Laplacian of a graph G on n nodes, that she defined as

$$\begin{aligned} \mathcal {L}(G):={{\,\mathrm{Id}\,}}-D(G)^{-1/2}A(G)D(G)^{-1/2}, \end{aligned}$$

where \({{\,\mathrm{Id}\,}}\) is the \(n\times n\) identity matrix, D(G) is the diagonal degree matrix and A(G) is the adjacency matrix of G. Chung proved that \(\mathcal {L}(G)\) has n real, nonnegative eigenvalues, denoted \(\lambda _1(G)\le \cdots \le \lambda _n(G)\), that encode many qualitative properties of G. In particular, she proved that, for a connected graph, the first two eigenvalues are such that \(\lambda _1(G)=0\) and

$$\begin{aligned} \frac{1}{2}h(G)^2\le \lambda _2(G)\le 2h(G). \end{aligned}$$
(1)

Therefore, as in the continuous case, h(G) can be used to estimate \(\lambda _2(G)\) and vice versa. Moreover, the eigenvectors corresponding to \(\lambda _2(G)\) can be used to approximate the Cheeger cut, as follows. An eigenvector of \(\mathcal {L}(G)\) can be seen as a function \(f:V\rightarrow \mathbb {R}\) and, if f is an eigenfunction with eigenvalue \(\lambda _2(G)\), then f must achieve both positive and negative values, and the edges between the sets

$$\begin{aligned} \{v\in V:f(v)\ge 0\} \quad \text {and} \quad \{v\in V:f(v)<0\} \end{aligned}$$

approximate the Cheeger cut. Since solving the Cheeger cut problem is NP-hard [26], while the eigenvalues and eigenvectors of \(\mathcal {L}(G)\) can be computed quickly, spectral clustering based on these results is a very common tool and has found many applications; see for instance [7, 9, 19, 25]. Citing [18]: “In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm”.
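In code, this sign-based cut amounts to a few lines of linear algebra. The following sketch (using NumPy; the two-triangle test graph and all names are illustrative, not from the text) computes an eigenvector of \(\lambda _2\) for \(\mathcal {L}(G)\), passes to the corresponding eigenfunction of L(G), and splits the vertices by sign.

```python
import numpy as np

# Sign-based spectral cut: take an eigenvector g of lambda_2 for
# Chung's Laplacian, pass to f = D^{-1/2} g, and split V by sign(f).
# The test graph (two triangles joined by one edge) is illustrative.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for u, w in edges:
    A[u, w] = A[w, u] = 1.0
Dm12 = np.diag(A.sum(axis=1) ** -0.5)
Lsym = np.eye(n) - Dm12 @ A @ Dm12      # symmetric normalized Laplacian
lam, U = np.linalg.eigh(Lsym)           # lam[0] ~ 0, lam[1] = lambda_2
f = Dm12 @ U[:, 1]                      # eigenfunction of L = Id - D^{-1}A
S = {v for v in range(n) if f[v] >= 0}
print(sorted(S))                        # one of the two triangles
```

On this graph the cut recovers the two triangles, and \(\lambda _2\) lies between \(\frac{1}{2}h^2\) and 2h, as (1) guarantees.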

Note that the normalized Laplacian or random walk Laplacian

$$\begin{aligned} L(G):={{\,\mathrm{Id}\,}}-D(G)^{-1}A(G)=D(G)^{-1/2}\mathcal {L}(G)D(G)^{1/2} \end{aligned}$$

is similar to \(\mathcal {L}(G)\), therefore these two matrices have the same spectrum. Moreover, f is an eigenfunction for L(G) with eigenvalue \(\lambda \) if and only if \(D^{1/2}f\) is an eigenfunction for \(\mathcal {L}(G)\) with eigenvalue \(\lambda \). Hence, the above statements for \(\mathcal {L}(G)\) can be equivalently stated for L(G), on which we will focus throughout this paper.

1.2 Aim of this Work

The aim of this work is to generalize the graph Cheeger inequalities and Cheeger cut to the case of uniform hypergraphs. Hypergraphs are a generalization of graphs in which vertices are joined by sets of any cardinality, and a hypergraph is said to be k-uniform if all its edges have cardinality k. Hypergraphs model many real networks (e.g. cellular networks [15], social networks [27], neural networks [21], opinion formation [16], epidemic networks [4]) and a hypergraph Cheeger cut could be applied to clustering problems on such networks.

The fundamental idea used here is the following. Given a connected simple graph G, its signless normalized Laplacian is

$$\begin{aligned} L^+(G):={{\,\mathrm{Id}\,}}+D(G)^{-1}A(G)=2{{\,\mathrm{Id}\,}}-L(G). \end{aligned}$$

It is such that

$$\begin{aligned} \lambda \text { is an eigenvalue for }L(G) \iff 2-\lambda \text { is an eigenvalue for }L^+(G) \end{aligned}$$

with the same eigenfunctions and, moreover, \(L^+(G)=L(G^+)\), where \(G^+\) is the signed graph obtained from G by letting each edge have a positive sign. Since, furthermore, \(h(G)=h(G^+)\), the Cheeger inequalities in (1) can be equivalently reformulated in terms of the second largest eigenvalue of \(L(G^+)\), as

$$\begin{aligned} \frac{1}{2}h(G^+)^2\le 2-\lambda _{n-1}(G^+)\le 2h(G^+). \end{aligned}$$
(2)

Also, the Cheeger cut can be approximated based on the sign of a given eigenfunction of \(\lambda _{n-1}(G^+)\). We shall use this equivalent formulation of the Cheeger inequalities in order to prove a generalization for uniform hypergraphs.
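The equivalence between the two spectra is easy to verify numerically. The following sketch (the small test graph, a triangle with a pendant vertex, is an arbitrary choice) checks that \(L^+(G)=2{{\,\mathrm{Id}\,}}-L(G)\), so that the spectra of L(G) and \(L^+(G)\) are related by \(\lambda \mapsto 2-\lambda \).

```python
import numpy as np

# Check that L^+ = 2 Id - L, so lambda is an eigenvalue of L(G) iff
# 2 - lambda is an eigenvalue of L^+(G), with the same eigenfunctions.
# The small test graph (a triangle with a pendant vertex) is arbitrary.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
A = np.zeros((n, n))
for u, w in edges:
    A[u, w] = A[w, u] = 1.0
Dinv = np.diag(1.0 / A.sum(axis=1))
L = np.eye(n) - Dinv @ A
Lplus = np.eye(n) + Dinv @ A
lam = np.sort(np.linalg.eigvals(L).real)
mu = np.sort(np.linalg.eigvals(Lplus).real)
print(np.allclose(np.sort(2 - lam), mu))   # True
```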

In particular, given a connected, k-uniform hypergraph \(\varGamma \), we will see it as an oriented hypergraph [24] with only positive signs and we will consider the corresponding hypergraph normalized Laplacian \(L(\varGamma )\) defined in [14]. We will define a generalized Cheeger constant \(h(\varGamma )\) for \(\varGamma \) that coincides with the classical one in the particular case of graphs and we will prove, in Theorem 1 below, that

$$\begin{aligned} \frac{1}{2(k-1)} h(\varGamma )^2\le k-\lambda _{n-1}(\varGamma )\le 2(k-1)h(\varGamma ). \end{aligned}$$

Clearly, the above inequalities generalize (2), therefore (1). Moreover, the proof will suggest that the eigenfunctions of \(\lambda _{n-1}(\varGamma )\) can be used to approximate the Cheeger cut.

1.3 Related Work

It is worth mentioning some related work that is present in literature. In [22], some Cheeger-like inequalities are shown for the smallest nonzero eigenvalue of \(L(\varGamma )\), for restricted classes of hypergraphs which satisfy either only a generalized Cheeger upper bound or only a generalized Cheeger lower bound. In [3, 8, 13, 17], Cheeger-type inequalities are shown for other operators on hypergraphs.

1.4 Structure of the Paper

In Sect. 2 we give the preliminary definitions and in Sect. 3 we present the main results. In Sect. 4 we prove the upper Cheeger inequality and in Sect. 5 we prove the lower bound.

2 Preliminary Definitions

Definition 1

[24] An oriented hypergraph is a triple \(\varGamma =(V,E,\psi _\varGamma )\) such that V is a finite set of vertices, E is a finite multiset of elements \(e\in \mathcal {P}(V)\setminus \{\emptyset \}\) called edges, while \(\psi _\varGamma :V\times E\rightarrow \{-1,0,+1\}\) is the incidence function and it is such that

$$\begin{aligned} \psi _\varGamma (v,e)\ne 0 \iff v\in e. \end{aligned}$$

Two vertices \(v\ne w\) are co-oriented in e if \(\psi _\varGamma (v,e)=\psi _\varGamma (w,e)\ne 0\) and they are anti-oriented in e if \(\psi _\varGamma (v,e)=-\psi _\varGamma (w,e)\ne 0\).

We fix, from here on, an oriented hypergraph \(\varGamma =(V,E,\psi _\varGamma )\) on n vertices \(v_1,\ldots ,v_n\).

Definition 2

The degree of a vertex v, denoted \(\text{deg} (v)\), is the number of edges containing v. The cardinality of an edge e, denoted |e|, is the number of vertices that are contained in e. \(\varGamma \) is d-regular if \(\text{deg} (v)=d\) is constant for all \(v\in V\); it is k-uniform if \(|e|=k\) is constant for all \(e\in E\).

Remark 1

Signed graphs can be seen as 2-uniform oriented hypergraphs such that E is a set. Simple graphs can be seen as signed graphs such that, for each \(e\in E\), there exists a unique \(v\in V\) with \(\psi _\varGamma (v,e)=1\) and there exists a unique \(w\in V\) with \(\psi _\varGamma (w,e)=-1\). Classical hypergraphs (the ones we are going to consider) can be seen as oriented hypergraphs such that

$$\begin{aligned} \psi _\varGamma (v,e)= 1 \iff v\in e. \end{aligned}$$

Definition 3

\(\varGamma \) is connected if, for every pair of vertices \(v,w\in V\), there exists a path that connects v and w, i.e., there exist \(w_1,\ldots ,w_m\in V\) and \(e_1,\ldots ,e_{m-1}\in E\) such that:

  • \(w_1=v\);

  • \(w_m=w\);

  • \(\{w_i,w_{i+1}\}\subseteq e_i\) for each \(i=1,\dots ,m-1\).

For simplicity, we shall assume that \(\varGamma \) is connected and has no vertices of degree zero. These assumptions are not restrictive, since the spectrum of a hypergraph is given by the union of the spectra of its connected components [20], while each vertex of degree zero simply produces 0 as eigenvalue [11]. We also assume that, for all \(v\in V\),

$$\begin{aligned} \text{deg} (v)\le \sum _{w\ne v} \text{deg} (w). \end{aligned}$$
(3)

This is always true in the case of graphs and we will need this assumption in the proof of the main theorem.

Definition 4

[14] The degree matrix of \(\varGamma \) is the \(n\times n\) diagonal matrix

$$\begin{aligned} D=D(\varGamma ):={{\,\mathrm{diag}\,}}\bigl ( \text{deg} (v_1),\ldots , \text{deg} (v_n)\bigr ). \end{aligned}$$

The adjacency matrix of \(\varGamma \) is the \(n\times n\) matrix \(A=A(\varGamma ):=(A_{ij})_{ij},\) where \(A_{ii}:=0\) for each \(i=1,\ldots ,n\) and, for \(i\ne j\),

$$\begin{aligned} A_{ij}:=&\biggl |\{\text {edges in which }v_i \text { and }v_j\text { are anti-oriented}\}\biggr |+\\&-\biggl |\{\text {edges in which }v_i \text { and }v_j\text { are co-oriented}\}\biggr |. \end{aligned}$$

The normalized Laplacian of \(\varGamma \) is the \(n\times n\) matrix

$$\begin{aligned} L=L(\varGamma ):={{\,\mathrm{Id}\,}}-D^{-1}A. \end{aligned}$$

Remark 2

If \(\varGamma \) is a simple graph, the adjacency matrix has (0, 1)-entries while, if \(\varGamma \) is a classical hypergraph (seen as an oriented hypergraph such that the incidence function has values in \(\{0,+1\}\)), then the adjacency matrix has nonpositive entries.

From here on we shall assume that \(\varGamma \) is a k-uniform, classical hypergraph, seen as an oriented hypergraph such that the incidence function has values in \(\{0,+1\}\).

As shown in [14], L has n real, nonnegative eigenvalues, counted with multiplicity. We denote them as

$$\begin{aligned} \lambda _1\le \cdots \le \lambda _n. \end{aligned}$$

Moreover, as shown in [20], since \(\varGamma \) is connected and k-uniform, \(\lambda _n=k\) and the constant functions are the corresponding eigenfunctions. By the Courant-Fischer-Weyl min-max principle (cf. [14]), the second largest eigenvalue of L can be characterized in terms of the Rayleigh quotient of a nonzero function \(f:V\rightarrow \mathbb {R}\),

$$\begin{aligned} {{\,\mathrm{RQ}\,}}(f):=\frac{\sum _{e\in E}\left( \sum _{v\in e}f(v)\right) ^2}{\sum _{v\in V} \text{deg} (v)f(v)^2}. \end{aligned}$$

In particular,

$$\begin{aligned} \lambda _{n-1}=\max _{f\perp \mathbf {1}}{{\,\mathrm{RQ}\,}}(f), \end{aligned}$$
(4)

where the condition \(f\perp \mathbf {1}\) denotes the orthogonality to the constants,

$$\begin{aligned} \sum _{v\in V} \text{deg} (v)f(v)=0, \end{aligned}$$

derived from the fact that the eigenfunctions corresponding to \(\lambda _n\) are the constant functions (cf. [14]).
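The Rayleigh quotient and the characterization (4) can be checked numerically. In the sketch below (the small 3-uniform hypergraph and all names are illustrative assumptions), the eigenpairs are obtained from the symmetric matrix \(D^{-1/2}(D-A)D^{-1/2}\), which is similar to L; we then verify that the eigenfunction of \(\lambda _{n-1}\) is orthogonal to the constants and attains the maximum in (4).

```python
import numpy as np

# The Rayleigh quotient RQ(f) of a k-uniform hypergraph, checked
# against (4): the eigenfunction of lambda_{n-1} is orthogonal to the
# constants and attains the maximum.  Eigenpairs come from the
# symmetric matrix D^{-1/2}(D - A)D^{-1/2}, which is similar to L.
edges = [{0, 1, 2}, {2, 3, 4}, {3, 4, 5}]   # a small 3-uniform example
n = 6
deg = np.array([sum(v in e for e in edges) for v in range(n)], float)

def RQ(f):
    num = sum(sum(f[v] for v in e) ** 2 for e in edges)
    return num / (deg * f ** 2).sum()

A = np.zeros((n, n))
for e in edges:
    for v in e:
        for w in e:
            if v != w:
                A[v, w] -= 1.0   # classical hypergraph: nonpositive entries
M = np.diag(deg ** -0.5) @ (np.diag(deg) - A) @ np.diag(deg ** -0.5)
lam, U = np.linalg.eigh(M)
f = np.diag(deg ** -0.5) @ U[:, -2]     # eigenfunction of lambda_{n-1}
print(abs((deg * f).sum()) < 1e-9)      # True: f is orthogonal to 1
print(np.isclose(RQ(f), lam[-2]))       # True: RQ attains lambda_{n-1}
```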

We now introduce the generalized Cheeger constant that will be used for bounding \(k-\lambda _{n-1}\).

Definition 5

Given \(S\subseteq V\), we let \(\bar{S}:=V\setminus S\), \({{\,\mathrm{vol}\,}}(S):=\sum _{v\in S} \text{deg} (v)\) and

$$\begin{aligned} E_r(S):=\{e\in E: |e\cap S|=r\}, \end{aligned}$$

for \(r\in \{1,\ldots ,k\}\).

Remark 3

Clearly, for each \(r\in \{0,1,\ldots ,k\}\), \(E_r(S)=E_{k-r}(\bar{S})\). Moreover,

$$\begin{aligned} E_k(S)=\{e\in E:e\subseteq S\}, \end{aligned}$$
$$\begin{aligned} E_0(S)=\{e\in E:e\subseteq \bar{S}\} \end{aligned}$$

and

$$\begin{aligned} {{\,\mathrm{vol}\,}}(S)=\sum _{v\in S} \text{deg} (v)=\sum _{r=1}^{k}r |E_r(S)|. \end{aligned}$$

Definition 6

Given \(\emptyset \ne S\subsetneq V\),

$$\begin{aligned} h(S):=\frac{\sum _{r=1}^{k-1}|E_r(S)|r(k-r)}{\min \{{{\,\mathrm{vol}\,}}(S),{{\,\mathrm{vol}\,}}(\bar{S})\}}. \end{aligned}$$

The Cheeger constant of \(\varGamma \) is

$$\begin{aligned} h:=\min _{\emptyset \ne S\subsetneq V}h(S). \end{aligned}$$

Remark 4

Observe that the quantity

$$\begin{aligned} \sum _{r=1}^{k-1}|E_r(S)|r(k-r) \end{aligned}$$

appearing in the numerator of h(S) counts the number of pairwise connections between S and \(\bar{S}\). Furthermore, if \(\varGamma \) is a graph, then \(k=2\), \(E_1(S)\) is the set of edges between S and \(\bar{S}\), and the Cheeger constant defined above coincides with the one introduced by Pólya and Szegő.
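Definition 6 can also be evaluated by exhaustive search, feasible for very small hypergraphs only. The sketch below (function name and 3-uniform test hypergraph are illustrative, with 0-based vertex labels) implements it directly.

```python
from itertools import combinations

# Brute-force evaluation of the generalized Cheeger constant h of
# Definition 6; exponential in n, so only for tiny hypergraphs.
def hypergraph_cheeger(edges, n, k):
    deg = [sum(v in e for e in edges) for v in range(n)]
    vol = lambda S: sum(deg[v] for v in S)
    V = set(range(n))
    best = float("inf")
    for size in range(1, n):
        for S in combinations(range(n), size):
            S = set(S)
            # numerator: sum over r of |E_r(S)| r (k - r)
            num = sum(len(e & S) * (k - len(e & S))
                      for e in edges if 0 < len(e & S) < k)
            best = min(best, num / min(vol(S), vol(V - S)))
    return best

print(hypergraph_cheeger([{0, 1, 2}, {2, 3, 4}, {3, 4, 5}], 6, 3))  # 0.5
```

On this example the minimum is attained at \(S=\{0,1,2\}\): one edge meets S in a single vertex, contributing \(1\cdot 2=2\) to the numerator, and \({{\,\mathrm{vol}\,}}(S)=4\).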

3 Main Results

3.1 Cheeger Inequalities

Our main result is the following theorem.

Theorem 1

Let \(\varGamma \) be a connected, k-uniform hypergraph. Then,

$$\begin{aligned} \frac{1}{2(k-1)} h^2\le k-\lambda _{n-1}\le 2(k-1)h. \end{aligned}$$

Remark 5

Theorem 1 generalizes (2), which is, in turn, equivalent to the classical Cheeger inequalities in (1).

We split the proof of Theorem 1 into two parts: in Sect. 4 we prove the upper bound and in Sect. 5 we prove the lower bound. Our proofs are inspired by the graph case, in particular by the proof method of [11, Lemma 2.1] for the upper bound and that of [11, Theorem 2.2] for the lower bound. However, the proofs presented here for hypergraphs are much longer and more complicated than those for graphs. Both proofs make use of the fact that the eigenfunctions corresponding to \(\lambda _{n-1}\) are orthogonal to the constants and, as we already observed, this is a consequence of the fact that \(\varGamma \) is uniform.

3.2 Cheeger Cut

The proof of Theorem 1 will also suggest that, as in the graph case, the Cheeger cut of \(\varGamma \) can be approximated by the sets

$$\begin{aligned} \{v\in V:f(v)\ge 0\} \quad \text {and} \quad \{v\in V:f(v)<0\}, \end{aligned}$$

for a given eigenfunction \(f:V\rightarrow \mathbb {R}\) of \(\lambda _{n-1}\). This gives a generalized method of spectral clustering for uniform hypergraphs.

Example 1

Let \(\varGamma \) be the hypergraph in Fig. 2, with vertex set \(V=\{v_1,\ldots ,v_6\}\) and edge set \(E=\{e_1,e_2,e_3\}\) such that:

  • \(e_1=\{v_1,v_2,v_3\}\);

  • \(e_2=\{v_3,v_4,v_5\}\);

  • \(e_3=\{v_4,v_5,v_6\}\).

Then, \( D=\text {diag}(1,1,2,2,2,1)\),

$$\begin{aligned} A=- \begin{pmatrix} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 &{} 1 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 2 &{} 1 \\ 0 &{} 0 &{} 1 &{} 2 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} L={{\,\mathrm{Id}\,}}-D^{-1}A= \begin{pmatrix} 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0.5 &{} 0.5 &{} 1 &{} 0.5 &{} 0.5 &{} 0 \\ 0 &{} 0 &{} 0.5 &{} 1 &{} 1 &{} 0.5 \\ 0 &{} 0 &{} 0.5 &{} 1 &{} 1 &{} 0.5 \\ 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 1 \end{pmatrix}. \end{aligned}$$

One can check that \(\lambda _{n-1}=\frac{3+\sqrt{3}}{2}\) and a corresponding eigenfunction is

$$\begin{aligned} f= \biggl (-\frac{1+\sqrt{3}}{2},-\frac{1+\sqrt{3}}{2},-\frac{1}{2},\frac{1+\sqrt{3}}{4},\frac{1+\sqrt{3}}{4},1\biggr ). \end{aligned}$$

Using f for approximating the Cheeger cut gives

$$\begin{aligned} \{v_1,v_2,v_3\}\quad \text {and}\quad \{v_4,v_5,v_6\}, \end{aligned}$$

as one would expect.
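Example 1 can be verified numerically. The sketch below (0-based vertex labels, otherwise the same data as in the example) rebuilds L, recovers \(\lambda _{n-1}=\frac{3+\sqrt{3}}{2}\), and reproduces the sign-based cut.

```python
import numpy as np

# Numerical check of Example 1 (vertices relabelled 0-based): rebuild
# L, recover lambda_{n-1} = (3 + sqrt(3))/2 and the sign-based cut.
edges = [{0, 1, 2}, {2, 3, 4}, {3, 4, 5}]   # e1, e2, e3
n, k = 6, 3
deg = np.array([sum(v in e for e in edges) for v in range(n)], float)
A = np.zeros((n, n))
for e in edges:
    for v in e:
        for w in e:
            if v != w:
                A[v, w] -= 1.0
L = np.eye(n) - np.diag(1.0 / deg) @ A
lam = np.sort(np.linalg.eigvals(L).real)
print(np.isclose(lam[-2], (3 + np.sqrt(3)) / 2))   # True

# Eigenfunction via the symmetric similar matrix, then the sign cut.
M = np.diag(deg ** -0.5) @ (np.diag(deg) - A) @ np.diag(deg ** -0.5)
_, U = np.linalg.eigh(M)
f = np.diag(deg ** -0.5) @ U[:, -2]
cut = {v for v in range(n) if f[v] >= 0}
print(sorted(cut))   # {v1,v2,v3} or {v4,v5,v6}, depending on the sign of f
```

The recovered cut coincides with the one in the example, up to the (irrelevant) global sign of the eigenvector.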

Fig. 2
figure 2

The hypergraph in Example 1

3.3 Key Idea

As we argued in Sect. 1, the key idea used in this paper is to first reformulate the graph Cheeger inequalities in terms of the signless normalized Laplacian and then generalize them for the second largest eigenvalue of the hypergraph normalized Laplacian. The first step is fundamental. In [22], for instance, an attempt was made to formulate generalized Cheeger inequalities in terms of the first nonzero eigenvalue of the hypergraph normalized Laplacian, but it led to generalizations only for restricted classes of hypergraphs, satisfying either only the Cheeger upper bound or only the Cheeger lower bound. The reason is that the properties of the smallest eigenvalues of the graph Laplacian are preserved, in the general case, by the largest eigenvalues of the hypergraph Laplacian.

This change of point of view also allows us to generalize, to the case of uniform hypergraphs, the fact that the multiplicity of the eigenvalue 0 of L counts the number of connected components of a simple graph [11]. In terms of the signless Laplacian, this is equivalent to saying that the multiplicity of 2 for \(L^+\) counts the number of connected components of a simple graph and this, in turn, is equivalent to saying that the multiplicity of 2 for L equals the number of connected components in the case of a signed graph in which each edge has a positive sign. While this property cannot be generalized to hypergraphs in terms of the multiplicity of 0 (cf. [14]), it can be generalized in terms of the multiplicity of \(\lambda _n\), as follows.

Theorem 2

If \(\varGamma \) is a k-uniform hypergraph, then the multiplicity of the eigenvalue k equals the number of connected components of \(\varGamma \).

Proof

It follows from the fact that, as shown in [20], a connected k-uniform hypergraph has eigenvalue \(\lambda _n=k\) and the corresponding eigenfunctions are exactly the constant functions. \(\square \)

3.4 Vertex Cut for Regular Hypergraphs

Given a hypergraph \(\varGamma =(V,E)\) on n nodes \(v_1,\ldots ,v_n\) and m edges \(e_1,\ldots ,e_m\), its dual hypergraph is \(\varGamma ^*:=(V^*,E^*)\), where:

  • \(V^*:=\{v_1^*,\ldots ,v_m^*\}\);

  • \(E^*:=\{e_1^*,\ldots ,e_n^*\}\);

  • \(v_j^*\in e_i^*\) in \(\varGamma ^*\) if and only if \(v_i\in e_j\) in \(\varGamma \).

Therefore, the vertices of \(\varGamma \) correspond to the edges of \(\varGamma ^*\) and vice versa. In particular, if \(\varGamma \) is d-regular, then \(\varGamma ^*\) is d-uniform. In this case, we can apply Theorem 1 to \(\varGamma ^*\) and the edge cut on \(\varGamma ^*\) can be translated into a vertex cut on \(\varGamma \), as follows.
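The dual construction is straightforward to implement. In the sketch below (the 2-regular, 3-uniform test hypergraph and the function name are illustrative assumptions), each dual edge \(e_i^*\) has cardinality \(\text{deg} (v_i)\), so the dual of a d-regular hypergraph is d-uniform.

```python
# Dual hypergraph: the dual edge e_i^* collects the indices of the
# edges of Gamma containing v_i, so |e_i^*| = deg(v_i) and the dual
# of a d-regular hypergraph is d-uniform.
def dual(edges, n):
    return [frozenset(j for j, e in enumerate(edges) if v in e)
            for v in range(n)]

# A 2-regular, 3-uniform hypergraph on 6 vertices and 4 edges.
edges = [{0, 1, 2}, {2, 3, 4}, {0, 4, 5}, {1, 3, 5}]
star = dual(edges, 6)
print(all(len(e) == 2 for e in star))   # True: the dual is 2-uniform
```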

Definition 7

Let \(\varGamma =(V,E)\) be a d-regular hypergraph. Given \(\emptyset \ne F\subsetneq E\), let \(\bar{F}:=E\setminus F\), \({{\,\mathrm{vol}\,}}(F):=\sum _{e\in F}|e|\) and

$$\begin{aligned} V_r(F):=\{v\in V: v \text { belongs to }r\text { edges in }F\}, \end{aligned}$$

for \(r\in \{1,\ldots ,d\}\). Let also

$$\begin{aligned} h_*(F):=\frac{\sum _{r=1}^{d-1}|V_r(F)|r(d-r)}{\min \{{{\,\mathrm{vol}\,}}(F),{{\,\mathrm{vol}\,}}(\bar{F})\}}. \end{aligned}$$

The vertex Cheeger constant of \(\varGamma \) is

$$\begin{aligned} h_*:=\min _{\emptyset \ne F\subsetneq E}h_*(F). \end{aligned}$$

Corollary 1

Let \(\varGamma \) be a connected, d-regular hypergraph. Then,

$$\begin{aligned} \frac{1}{2(d-1)} h_*^2\le d-\lambda _{m-1}(\varGamma ^*)\le 2(d-1)h_*. \end{aligned}$$

Proof

It follows from Theorem 1 applied to \(\varGamma ^*\). \(\square \)

In particular, using the signs of an eigenfunction of \(\lambda _{m-1}(\varGamma ^*)\), one can give an edge cut for \(\varGamma ^*\) corresponding to a vertex cut for \(\varGamma \).

3.5 Bipartite Uniform Hypergraphs

For future directions, it will be interesting to see whether the results presented here can be extended to classical hypergraphs that are not necessarily uniform and, more generally, to oriented hypergraphs. We can already say something for bipartite hypergraphs: oriented hypergraphs whose vertex set can be partitioned into two disjoint subsets as \(V=V_1\sqcup V_2\), such that each edge contains all its positive incidences in \(V_1\) and all its negative incidences in \(V_2\), or vice versa. Bipartite hypergraphs generalize bipartite graphs and, as shown in [2], a bipartite hypergraph \(\varGamma =(V,E,\psi _\varGamma )\) has the same spectrum as \(\varGamma ^+:=(V,E,\psi _{\varGamma ^+})\), where \(\psi _{\varGamma ^+}\) is such that

$$\begin{aligned} \psi _{\varGamma ^+}(v,e)= 1 \iff v\in e. \end{aligned}$$

This implies that the Cheeger inequalities in Theorem 1 also hold for bipartite k-uniform hypergraphs. However, since the eigenfunctions of \(\varGamma \) and \(\varGamma ^+\) differ by changes of signs, in this case we cannot approximate the Cheeger cut by

$$\begin{aligned} \{v\in V:f(v)\ge 0\} \quad \text {and} \quad \{v\in V:f(v)<0\}, \end{aligned}$$

for a given eigenfunction \(f:V\rightarrow \mathbb {R}\) of \(\lambda _{n-1}\).

4 Proof of the Upper Bound

Theorem 3

Let \(\varGamma \) be a connected, k-uniform hypergraph. Then,

$$\begin{aligned} k-\lambda _{n-1}\le 2(k-1)h. \end{aligned}$$

Proof

Let \(\emptyset \ne S\subsetneq V\) be such that \(h=h(S)=h(\bar{S})\), and assume, without loss of generality, that \({{\,\mathrm{vol}\,}}(S)\le {{\,\mathrm{vol}\,}}(\bar{S})\). Let

$$\begin{aligned} \alpha :=\frac{{{\,\mathrm{vol}\,}}(S)}{{{\,\mathrm{vol}\,}}(\bar{S})}\le 1 \end{aligned}$$

and let f be a function on V defined by

$$\begin{aligned} f(v):={\left\{ \begin{array}{ll} 1 &{}\text { if }v\in S\\ -\alpha &{}\text { if }v\in \bar{S}. \end{array}\right. } \end{aligned}$$

By construction of f, \(\sum _{v\in V} \text{deg} (v) f(v)=0\), that is, f is orthogonal to the constants. Thus, by (4),

$$\begin{aligned} \lambda _{n-1}&\ge {{\,\mathrm{RQ}\,}}(f)\\&=\frac{\sum _{e\in E}\bigl (\sum _{v\in e}f(v)\bigr )^2}{\sum _{v\in V} \text{deg} (v)f(v)^2}\\&=\frac{\sum _{e\in E}\bigl (\sum _{v\in e\cap S}1-\sum _{v\in e\cap \bar{S}}\alpha \bigr )^2}{{{\,\mathrm{vol}\,}}(S)+\alpha ^2{{\,\mathrm{vol}\,}}(\bar{S})}\\ \bigl (\text {by }\alpha ^2{{\,\mathrm{vol}\,}}(\bar{S})=\alpha {{\,\mathrm{vol}\,}}(S) \bigr )&=\frac{\sum _{e\in E}\bigl (|e\cap S|-\alpha |e\cap \bar{S}|\bigr )^2}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}\\ \bigl (\text {by }|e|=k\,\,\forall e\in E \bigr )&=\frac{\sum _{e\in E}\bigl (k-|e\cap \bar{S}| -\alpha |e\cap \bar{S}|\bigr )^2}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}\\&=\frac{\sum _{e\in E}\bigl (k-|e\cap \bar{S}|\cdot (\alpha +1) \bigr )^2}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}\\&=\frac{|E| k^2}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}+\frac{\sum _{e\in E}|e\cap \bar{S}|^2(\alpha +1)}{{{\,\mathrm{vol}\,}}(S)}-2k\cdot \frac{\sum _{e\in E}|e\cap \bar{S}| }{{{\,\mathrm{vol}\,}}(S)}\\ \bigl (\text {by }\sum _{e\in E}|e\cap \bar{S}|={{\,\mathrm{vol}\,}}(\bar{S}) \bigr )\quad&=\frac{|E| k^2}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}+\frac{\sum _{e\in E}|e\cap \bar{S}|^2(\alpha +1)}{{{\,\mathrm{vol}\,}}(S)}-\frac{2k}{\alpha }\\&=\frac{|E| k^2}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}+\frac{\sum _{r=1}^k\sum _{e\in E:|e\cap \bar{S}|=r}r^2(\alpha +1)}{{{\,\mathrm{vol}\,}}(S)}-\frac{2k}{\alpha }\\ \bigl (\text {by }|E|k={{\,\mathrm{vol}\,}}(V) \bigr )\quad&=\frac{k\cdot {{\,\mathrm{vol}\,}}(V)}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}+ \frac{(\alpha +1)\cdot \sum _{r=1}^k|E_r(\bar{S})|r^2}{{{\,\mathrm{vol}\,}}(S)}-\frac{2k}{\alpha }. \end{aligned}$$

Now, observe that

$$\begin{aligned} \frac{k\cdot {{\,\mathrm{vol}\,}}(V)}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}=\frac{k\cdot ( {{\,\mathrm{vol}\,}}(S)+{{\,\mathrm{vol}\,}}(\bar{S}))}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}=\frac{k}{\alpha +1}+\frac{k}{\alpha (\alpha +1)} \end{aligned}$$

and we have that

$$\begin{aligned} \frac{k}{\alpha +1}+\frac{k}{\alpha (\alpha +1)}-\frac{2k}{\alpha }=\frac{k(\alpha +1-2\alpha -2)}{\alpha (\alpha +1)}=-\frac{k}{\alpha }. \end{aligned}$$

Therefore, by putting everything together,

$$\begin{aligned} \lambda _{n-1}&\ge {{\,\mathrm{RQ}\,}}(f)\\&=\frac{k\cdot {{\,\mathrm{vol}\,}}(V)}{(\alpha +1){{\,\mathrm{vol}\,}}(S)}+ \frac{(\alpha +1)\cdot \sum _{r=1}^k|E_r(\bar{S})|r^2}{{{\,\mathrm{vol}\,}}(S)}-\frac{2k}{\alpha }\\&=\frac{(\alpha +1)\cdot \sum _{r=1}^k|E_r(\bar{S})|r^2}{{{\,\mathrm{vol}\,}}(S)}-\frac{k}{\alpha }\\&= \frac{(\alpha +1)\cdot \sum _{r=1}^{k-1}|E_r(\bar{S})|r^2}{{{\,\mathrm{vol}\,}}(S)}+\frac{(\alpha +1)\cdot |E_k(\bar{S})|k^2}{{{\,\mathrm{vol}\,}}(S)}-\frac{k}{\alpha }\\&\ge \frac{(\alpha +1)\cdot \sum _{r=1}^{k-1}|E_r(\bar{S})|r}{{{\,\mathrm{vol}\,}}(S)}+\frac{(\alpha +1)\cdot |E_k(\bar{S})|k}{{{\,\mathrm{vol}\,}}(S)}+\frac{(\alpha +1)\cdot |E_k(\bar{S})|k(k-1)}{{{\,\mathrm{vol}\,}}(S)}-\frac{k}{\alpha }\\&=\frac{(\alpha +1)\cdot \sum _{r=1}^{k}|E_r(\bar{S})|r}{{{\,\mathrm{vol}\,}}(S)}+\frac{(\alpha +1)\cdot |E_k(\bar{S})|k(k-1)}{{{\,\mathrm{vol}\,}}(S)}-\frac{k}{\alpha }. \end{aligned}$$

Now, since \({{\,\mathrm{vol}\,}}(\bar{S})=\sum _{r=1}^k|E_r(\bar{S})|r\),

$$\begin{aligned} |E_k(\bar{S})|k={{\,\mathrm{vol}\,}}(\bar{S})-\sum _{r=1}^{k-1}|E_r(\bar{S})|r. \end{aligned}$$

Hence,

$$\begin{aligned} \lambda _{n-1}&\ge \frac{(\alpha +1){{\,\mathrm{vol}\,}}(\bar{S})}{{{\,\mathrm{vol}\,}}(S)}+(\alpha +1)\cdot (k-1)\cdot \Biggl (\frac{|E_k(\bar{S})|k}{{{\,\mathrm{vol}\,}}(S)}\Biggr )-\frac{k}{\alpha }\\&= \frac{(\alpha +1){{\,\mathrm{vol}\,}}(\bar{S})}{{{\,\mathrm{vol}\,}}(S)}+(\alpha +1)\cdot (k-1)\cdot \Biggl (\frac{{{\,\mathrm{vol}\,}}(\bar{S})}{{{\,\mathrm{vol}\,}}(S)}-\frac{\sum _{r=1}^{k-1}|E_r(\bar{S})|r}{{{\,\mathrm{vol}\,}}(S)}\Biggr )-\frac{k}{\alpha }\\&=\frac{(\alpha +1)}{\alpha }-\frac{k}{\alpha }+(\alpha +1)\cdot (k-1)\cdot \Biggl (\frac{1}{\alpha }-\frac{\sum _{r=1}^{k-1}|E_r(\bar{S})|r}{{{\,\mathrm{vol}\,}}(S)}\Biggr )\\&=\frac{\alpha +1-k+(\alpha +1)(k-1)}{\alpha }-(\alpha +1)\cdot (k-1)\cdot \Biggl (\frac{\sum _{r=1}^{k-1}|E_r(\bar{S})|r}{{{\,\mathrm{vol}\,}}(S)}\Biggr )\\&=k-(\alpha +1)\cdot (k-1)\cdot \Biggl (\frac{\sum _{r=1}^{k-1}|E_r(\bar{S})|r}{{{\,\mathrm{vol}\,}}(S)}\Biggr )\\ \bigl (\text {by }\alpha +1\le 2\bigr )\quad&\ge k-2(k-1)\Biggl (\frac{\sum _{r=1}^{k-1}|E_r(\bar{S})|r}{{{\,\mathrm{vol}\,}}(S)}\Biggr )\\&\ge k-2(k-1)\Biggl (\frac{\sum _{r=1}^{k-1}|E_r(\bar{S})|r(k-r)}{{{\,\mathrm{vol}\,}}(S)}\Biggr )\\&=k-2(k-1)h. \end{aligned}$$

The claim follows. \(\square \)

5 Proof of the Lower Bound

Theorem 4

Let \(\varGamma \) be a connected, k-uniform hypergraph. Then,

$$\begin{aligned} k-\lambda _{n-1}\ge \frac{1}{2(k-1)} h^2. \end{aligned}$$

Proof

We follow and generalize the proof method of [11, Theorem 2.2].

Let f be an eigenfunction for L with eigenvalue \(\lambda _{n-1}\). Without loss of generality, we relabel the vertices so that

$$\begin{aligned} f(v_i)\ge f(v_{i+1}),\,\text { for }i=1,\ldots ,n-1. \end{aligned}$$

Let \(S_i:=\{v_1,\ldots ,v_i\}\) and let

$$\begin{aligned} t:=\max \{i:{{\,\mathrm{vol}\,}}(S_i)\le {{\,\mathrm{vol}\,}}(\bar{S_i})\}. \end{aligned}$$

Since we are assuming (3), t is well defined. Now, since f is orthogonal to the constants, \(\sum _{v\in V}f(v) \text{deg} (v)=0\). Hence,

$$\begin{aligned} \sum _{v\in V} \text{deg} (v)\biggl (f(v)+f(v_t)\biggr )^2&=\sum _{v\in V} \text{deg} (v) f(v)^2+f(v_t)^2{{\,\mathrm{vol}\,}}(V)\\&\ge \sum _{v\in V} \text{deg} (v) f(v)^2. \end{aligned}$$

This implies that

$$\begin{aligned} k-\lambda _{n-1}&=k-\frac{\sum _{e\in E}\bigl (\sum _{v\in e}f(v)\bigr )^2}{\sum _{v\in V} \text{deg} (v)f(v)^2}\\&=\frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2-\sum _{e\in E}\bigl (\sum _{v\in e}f(v)\bigr )^2}{\sum _{v\in V} \text{deg} (v)f(v)^2}\\&\ge \frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2-\sum _{e\in E}\bigl (\sum _{v\in e}f(v)\bigr )^2}{\sum _{v\in V} \text{deg} (v)\biggl (f(v)+f(v_t)\biggr )^2}. \end{aligned}$$

Now, for \(v\in V\), let

$$\begin{aligned} f_+(v):={\left\{ \begin{array}{ll}f(v)+f(v_t) &{}\text {if }f(v)+f(v_t)\ge 0\\ 0 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

and let

$$\begin{aligned} f_-(v):={\left\{ \begin{array}{ll}|f(v)+f(v_t)| &{}\text {if }f(v)+f(v_t)\le 0\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

Then,

$$\begin{aligned} f_+(v)+f_-(v)=|f(v)+f(v_t)| \quad \forall v\in V; \end{aligned}$$
(5)

similarly

$$\begin{aligned} f_+(v)^2+f_-(v)^2=\biggl (f(v)+f(v_t)\biggr )^2 \quad \forall v\in V \end{aligned}$$
(6)

and, by (6),

$$\begin{aligned} \sum _{v\in V} \text{deg} (v)\biggl (f(v)+f(v_t)\biggr )^2=\sum _{v\in V} \text{deg} (v)\biggl (f_+(v)^2+f_-(v)^2\biggr ). \end{aligned}$$

Moreover,

$$\begin{aligned} \sum _{e\in E}\Biggl (\sum _{v\in e}f(v)\Biggr )^2\le \sum _{e\in E}\Biggl (\biggl (\sum _{v\in e}f_+(v)\biggr )^2+\biggl (\sum _{v\in e}f_-(v)\biggr )^2\Biggr )-|E|k^2f(v_t)^2. \end{aligned}$$
(7)

To see this, observe first that, for each \(e\in E\),

$$\begin{aligned}&\biggl (\sum _{v\in e}f_+(v)\biggr )^2+\biggl (\sum _{v\in e}f_-(v)\biggr )^2\\&=\sum _{v\in e}\biggl (f_+(v)^2+f_-(v)^2\biggr )+2\sum _{v\ne w:\{v,w\}\subseteq e}\biggl (f_+(v)f_+(w)+f_-(v)f_-(w)\biggr )\\ \bigl (\text {by }(6)\bigr )\quad&=\sum _{v\in e}\biggl (f(v)+f(v_t)\biggr )^2+2\sum _{v\ne w:\{v,w\}\subseteq e}\biggl (f_+(v)f_+(w)+f_-(v)f_-(w)\biggr )\\ \bigl (\text {by construction of }f\bigr )\quad&\ge \sum _{v\in e}\biggl (f(v)+f(v_t)\biggr )^2+2\sum _{v\ne w:\{v,w\}\subseteq e}\biggl (f(v)+f(v_t)\biggr )\biggl (f(w)+f(v_t)\biggr )\\&=\sum _{v\in e}f(v)^2+k\cdot f(v_t)^2+2f(v_t)\cdot \sum _{v\in e}f(v)+\\&\quad +2\sum _{v\ne w:\{v,w\}\subseteq e}\Biggl (f(v)f(w)+f(v_t)f(v)+f(v_t)f(w)+f(v_t)^2\Biggr )\\ \bigl (\text {since }|e|=k\,\,\forall e\in E\bigr )\quad&=\sum _{v\in e}f(v)^2+k\cdot f(v_t)^2+2f(v_t)\cdot \sum _{v\in e}f(v)+\\&\quad +2\sum _{v\ne w:\{v,w\}\subseteq e}f(v)f(w)+2f(v_t)(k-1)\sum _{v\in e}f(v)+k(k-1)f(v_t)^2\\&=\sum _{v\in e}f(v)^2+2\sum _{v\ne w:\{v,w\}\subseteq e}f(v)f(w)+2f(v_t)k\cdot \sum _{v\in e}f(v)+k^2f(v_t)^2\\&=\Biggl (\sum _{v\in e}f(v)\Biggr )^2+2f(v_t)k\cdot \sum _{v\in e}f(v)+k^2f(v_t)^2. \end{aligned}$$

In going from the third to the fourth line, we used the fact that, by the definitions of \(f_+\) and \(f_-\),

$$\begin{aligned} \biggl (f_+(v)f_+(w)+f_-(v)f_-(w)\biggr )\ge \biggl (f(v)+f(v_t)\biggr )\biggl (f(w)+f(v_t)\biggr ), \text { for }v\ne w. \end{aligned}$$
(8)

This can be seen in more detail by considering the following three cases.

  • Case 1:

    $$\begin{aligned} f(v)+f(v_t)\ge 0 \text { and }f(w)+f(v_t)\ge 0. \end{aligned}$$

    In this case, by the definitions of \(f_+\) and \(f_-\),

    $$\begin{aligned} f_+(v)f_+(w)+f_-(v)f_-(w)= f_+(v)f_+(w)= \biggl (f(v)+f(v_t)\biggr )\biggl (f(w)+f(v_t)\biggr ). \end{aligned}$$
  • Case 2:

    $$\begin{aligned} f(v)+f(v_t)\le 0 \text { and }f(w)+f(v_t)\le 0. \end{aligned}$$

    In this case, by the definitions of \(f_+\) and \(f_-\),

    $$\begin{aligned} f_+(v)f_+(w)+f_-(v)f_-(w)= f_-(v)f_-(w)= \biggl |f(v)+f(v_t)\biggr |\cdot \biggl |f(w)+f(v_t)\biggr |. \end{aligned}$$
  • Case 3:

    $$\begin{aligned} f(v)+f(v_t)\ge 0 \text { and }f(w)+f(v_t)\le 0, \text { or vice versa}. \end{aligned}$$

    In this case, by the definitions of \(f_+\) and \(f_-\),

    $$\begin{aligned} f_+(v)f_+(w)+f_-(v)f_-(w)=0, \end{aligned}$$

    while

    $$\begin{aligned} \biggl (f(v)+f(v_t)\biggr )\biggl (f(w)+f(v_t)\biggr )\le 0. \end{aligned}$$

This proves (8). Therefore,

$$\begin{aligned}&\sum _{e\in E}\Biggl (\biggl (\sum _{v\in e}f_+(v)\biggr )^2+\biggl (\sum _{v\in e}f_-(v)\biggr )^2\Biggr )\\&\quad \ge \sum _{e\in E}\Biggl (\Biggl (\sum _{v\in e}f(v)\Biggr )^2+2f(v_t)k\cdot \sum _{v\in e}f(v)+k^2f(v_t)^2\Biggr )\\&\quad =\sum _{e\in E}\Biggl (\sum _{v\in e}f(v)\Biggr )^2+2f(v_t)k\cdot \sum _{v\in V} \text{deg} (v)f(v)+|E|k^2f(v_t)^2\\&\quad \quad \Biggl (\text {by }\sum _{v\in V} \text{deg} (v)f(v)=0\Biggr )=\sum _{e\in E}\Biggl (\sum _{v\in e}f(v)\Biggr )^2+|E|k^2f(v_t)^2. \end{aligned}$$

Hence,

$$\begin{aligned} \sum _{e\in E}\Biggl (\sum _{v\in e}f(v)\Biggr )^2\le \sum _{e\in E}\Biggl (\biggl (\sum _{v\in e}f_+(v)\biggr )^2+\biggl (\sum _{v\in e}f_-(v)\biggr )^2\Biggr )-|E|k^2f(v_t)^2. \end{aligned}$$

This proves (7). Putting everything together,

$$\begin{aligned} k-\lambda _{n-1}&\ge \frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2-\sum _{e\in E}\Biggl (\sum _{v\in e}f(v)\Biggr )^2}{\sum _{v\in V} \text{deg} (v)\Biggl (f(v)+f(v_t)\Biggr )^2}\\&\ge \frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2+|E|k^2f(v_t)^2-\sum _{e\in E}\Biggl (\bigl (\sum _{v\in e}f_+(v)\bigr )^2+\biggl (\sum _{v\in e}f_-(v)\biggr )^2\Biggr )}{\sum _{v\in V} \text{deg} (v)\biggl (f_+(v)^2+f_-(v)^2\biggr )}\\&=\frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2+|E|k^2f(v_t)^2}{\sum _{v\in V} \text{deg} (v)\biggl (f(v)+f(v_t)\biggr )^2}-\frac{\sum _{e\in E}\Biggl (\biggl (\sum _{v\in e}f_+(v)\biggr )^2+\biggl (\sum _{v\in e}f_-(v)\biggr )^2\Biggr )}{\sum _{v\in V} \text{deg} (v)\biggl (f_+(v)^2+f_-(v)^2\biggr )}\\&\ge \frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2+|E|k^2f(v_t)^2}{\sum _{v\in V} \text{deg} (v)\biggl (f(v)+f(v_t)\biggr )^2}-\max \{{{\,\mathrm{RQ}\,}}(f_+),{{\,\mathrm{RQ}\,}}(f_-)\}, \end{aligned}$$

since, for \(c,d>0\),

$$\begin{aligned} \frac{a+b}{c+d}\le \max \Bigg \{\frac{a}{c},\frac{b}{d}\Bigg \}. \end{aligned}$$
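This mediant-type bound holds for arbitrary real a, b as long as the denominators c, d are positive; a quick numerical sanity check (illustrative only):

```python
import random

def mediant_bound_holds(a, b, c, d):
    """Check (a + b)/(c + d) <= max(a/c, b/d), valid whenever c, d > 0."""
    return (a + b) / (c + d) <= max(a / c, b / d) + 1e-12

random.seed(1)
ok = all(
    mediant_bound_holds(
        random.uniform(-10, 10), random.uniform(-10, 10),
        random.uniform(0.1, 10), random.uniform(0.1, 10),
    )
    for _ in range(10_000)
)
print(ok)  # the bound holds on every sample
```

The one-line proof mirrors the code: \(a\le c\cdot \max \) and \(b\le d\cdot \max \), so \(a+b\le (c+d)\cdot \max \).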

Now assume, without loss of generality, that \({{\,\mathrm{RQ}\,}}(f_+)\ge {{\,\mathrm{RQ}\,}}(f_-)\). Then,

$$\begin{aligned} k-\lambda _{n-1}&\ge \frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2+|E|k^2f(v_t)^2}{\sum _{v\in V} \text{deg} (v)\biggl (f(v)+f(v_t)\biggr )^2}-{{\,\mathrm{RQ}\,}}(f_+). \end{aligned}$$

Now, since f is orthogonal to the constants and \({{\,\mathrm{vol}\,}}(V)=\sum _{v\in V} \text{deg} (v)=|E|k\),

$$\begin{aligned} \frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2+|E|k^2f(v_t)^2}{\sum _{v\in V} \text{deg} (v)\biggl (f(v)+f(v_t)\biggr )^2}&=\frac{k\cdot \sum _{v\in V} \text{deg} (v)f(v)^2+|E|k^2f(v_t)^2}{\sum _{v\in V} \text{deg} (v)\biggl (f(v)^2+f(v_t)^2\biggr )}\\&=k\cdot \frac{\sum _{v\in V} \text{deg} (v)f(v)^2+|E|kf(v_t)^2}{\sum _{v\in V} \text{deg} (v)f(v)^2+|E|kf(v_t)^2}\\&=k. \end{aligned}$$
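This cancellation is easy to reproduce numerically. The sketch below is an illustration under the stated assumptions: it builds a random 3-uniform hypergraph, projects a random function so that \(\sum _{v}\text{deg}(v)f(v)=0\), uses a hypothetical constant `c` standing in for \(f(v_t)\), and confirms that the ratio collapses to k.

```python
import random
from itertools import combinations

random.seed(2)
n, k = 6, 3
vertices = list(range(n))
# a random 3-uniform hypergraph on 6 vertices with 8 distinct hyperedges
edges = random.sample(list(combinations(vertices, k)), 8)
deg = [sum(v in e for e in edges) for v in vertices]
vol = sum(deg)  # = |E| * k, since every hyperedge has exactly k vertices

# random f, projected so that sum_v deg(v) f(v) = 0 (orthogonality)
f = [random.uniform(-1.0, 1.0) for _ in vertices]
mean = sum(d * x for d, x in zip(deg, f)) / vol
f = [x - mean for x in f]

c = 0.37  # hypothetical shift, playing the role of f(v_t)
num = k * sum(d * x * x for d, x in zip(deg, f)) + len(edges) * k ** 2 * c ** 2
den = sum(d * (x + c) ** 2 for d, x in zip(deg, f))
print(abs(num / den - k) < 1e-9)  # the ratio equals k
```

The key point is that, after the projection, the cross term \(2c\sum _v \text{deg}(v)f(v)\) in the denominator vanishes, and the leftover \(c^2\,{{\,\mathrm{vol}\,}}(V)=|E|kc^2\) matches the numerator's correction term.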

Hence, letting \(E(v,w)\) denote the set of edges that contain both v and w,

$$\begin{aligned} k-\lambda _{n-1}&\ge k-{{\,\mathrm{RQ}\,}}(f_+)\\&=k-\frac{\sum _{e\in E}\biggl (\sum _{v\in e}f_+(v)\biggr )^2}{\sum _{v\in V} \text{deg} (v)f_+(v)^2}\\&=\frac{k\cdot \biggl (\sum _{v\in V} \text{deg} (v)f_+(v)^2\biggr )-\sum _{e\in E}\biggl (\sum _{v\in e}f_+(v)^2+2\sum _{\{v,w\}\subseteq e:v\ne w}f_+(v)f_+(w)\biggr )}{\sum _{v\in V} \text{deg} (v)f_+(v)^2}\\&=\frac{(k-1)\cdot \biggl (\sum _{v\in V} \text{deg} (v)f_+(v)^2\biggr )-2\sum _{v\ne w}|E(v,w)|f_+(v)f_+(w)}{\sum _{v\in V} \text{deg} (v)f_+(v)^2}\\&=\frac{\sum _{e\in E}\sum _{\{v,w\}\subseteq e}\biggl (f_+(v)-f_+(w)\biggr )^2}{\sum _{v\in V} \text{deg} (v)f_+(v)^2}\\&=\frac{\sum _{e\in E}\sum _{\{v,w\}\subseteq e}\biggl (f_+(v)-f_+(w)\biggr )^2}{\sum _{v\in V} \text{deg} (v)f_+(v)^2}\cdot \frac{\sum _{e\in E}\sum _{\{v,w\}\subseteq e}\biggl (f_+(v)+f_+(w)\biggr )^2}{\sum _{e\in E}\sum _{\{v,w\}\subseteq e}\biggl (f_+(v)+f_+(w)\biggr )^2}\\&\ge \frac{\Biggl (\sum _{v\ne w}|E(v,w)|\cdot \biggl (f_+(v)-f_+(w)\biggr )^2\Biggr )\cdot \Biggl (\sum _{v\ne w}|E(v,w)|\cdot \biggl (f_+(v)+f_+(w)\biggr )^2\Biggr )}{2(k-1)\biggl (\sum _{v\in V} \text{deg} (v)f_+(v)^2\biggr )^2}, \end{aligned}$$

using the inequality \( \bigl (f_+(v)+f_+(w)\bigr )^2 \le 2 \bigl (f_+(v)^2+f_+(w)^2\bigr )\) in the denominator. Now, by the Cauchy–Schwarz inequality, the numerator in the last line satisfies

$$\begin{aligned}&\Biggl (\sum _{v\ne w}|E(v,w)|\cdot \biggl (f_+(v)-f_+(w)\biggr )^2\Biggr )\cdot \Biggl (\sum _{v\ne w}|E(v,w)|\cdot \biggl (f_+(v)+f_+(w)\biggr )^2\Biggr ) \\&\ge \Biggl (\sum _{v\ne w}|E(v,w)|\cdot \biggl (f_+(v)-f_+(w)\biggr )\biggl (f_+(v)+f_+(w)\biggr )\Biggr )^2\\&=\Biggl (\sum _{v\ne w}|E(v,w)|\cdot \biggl (f_+(v)^2-f_+(w)^2\biggr )\Biggr )^2. \end{aligned}$$
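The weighted Cauchy–Schwarz inequality used here, \(\bigl (\sum _i w_ix_i^2\bigr )\bigl (\sum _i w_iy_i^2\bigr )\ge \bigl (\sum _i w_ix_iy_i\bigr )^2\) with nonnegative weights \(w_i=|E(v,w)|\), can be sanity-checked as follows (illustrative sketch with random data):

```python
import random

def weighted_cs_holds(w, x, y):
    """(sum w x^2)(sum w y^2) >= (sum w x y)^2, for weights w >= 0."""
    lhs = (sum(wi * xi * xi for wi, xi in zip(w, x))
           * sum(wi * yi * yi for wi, yi in zip(w, y)))
    rhs = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y)) ** 2
    return lhs >= rhs - 1e-9

random.seed(3)
ok = True
for _ in range(1_000):
    fp = [random.uniform(0.0, 2.0) for _ in range(11)]  # sampled f_+ values
    w = [random.randint(0, 4) for _ in range(10)]       # weights |E(v, w)|
    x = [fp[i] - fp[i + 1] for i in range(10)]          # differences
    y = [fp[i] + fp[i + 1] for i in range(10)]          # sums
    ok = ok and weighted_cs_holds(w, x, y)
print(ok)
```

Note that the product \(x_iy_i\) is exactly the difference of squares \(f_+(v_i)^2-f_+(v_{i+1})^2\) appearing on the right-hand side of the displayed bound.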

Hence,

$$\begin{aligned} k-\lambda _{n-1}\ge \frac{\Biggl (\sum _{v\ne w}|E(v,w)|\cdot \biggl (f_+(v)^2-f_+(w)^2\biggr )\Biggr )^2}{2(k-1)\biggl (\sum _{v\in V} \text{deg} (v)f_+(v)^2\biggr )^2}. \end{aligned}$$

Now,

$$\begin{aligned}&\sum _{v\ne w}|E(v,w)|\cdot \biggl (f_+(v)^2-f_+(w)^2\biggr )\\&\quad =\sum _{a<c}|E(v_a,v_c)|\cdot \biggl (f_+(v_a)^2-f_+(v_c)^2\biggr )\\&\quad =\sum _{a<c}|E(v_a,v_c)|\cdot \sum _{i=a}^{c-1}\biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\\&\quad =\sum _{a<c}\sum _{i=a}^{c-1}|E(v_a,v_c)|\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\\&\quad =\sum _{i=1}^{n-1}\sum _{a\le i}\sum _{c>i}|E(v_a,v_c)|\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\\&\quad =\sum _{i=1}^{n-1}\sum _{v_a\in S_i}\sum _{v_c\in \bar{S_i}}|E(v_a,v_c)|\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\\&\quad =\sum _{i=1}^{n-1} \sum _{r=1}^{k-1}r(k-r)|E_r(S_i)|\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr ). \end{aligned}$$
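The exchange of summations above is a discrete telescoping (Abel-type) identity: each pair \(a<c\) is split across the sweep cuts it crosses. A small numerical check (illustrative, with random data) confirms that the pairwise form and the sweep-cut form agree:

```python
import random

random.seed(4)
n = 7
x = [random.uniform(0.0, 3.0) for _ in range(n)]  # stands for f_+(v_i)^2
# M[a][c] stands for |E(v_a, v_c)| (only entries with a < c are used)
M = [[random.randint(0, 3) for _ in range(n)] for _ in range(n)]

# pairwise form: sum over pairs a < c
lhs = sum(M[a][c] * (x[a] - x[c]) for a in range(n) for c in range(a + 1, n))

# sweep form: cut weight across {v_1,...,v_i} vs the rest, times x_i - x_{i+1}
rhs = sum(
    sum(M[a][c] for a in range(i + 1) for c in range(i + 1, n)) * (x[i] - x[i + 1])
    for i in range(n - 1)
)
print(abs(lhs - rhs) < 1e-9)  # the two forms agree
```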

It follows that

$$\begin{aligned} k-\lambda _{n-1}\ge \frac{\Biggl (\sum _{i=1}^{n-1}\sum _{r=1}^{k-1}r(k-r)|E_r(S_i)|\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\Biggr )^2}{2(k-1)\biggl (\sum _{v\in V} \text{deg} (v)f_+(v)^2\biggr )^2}. \end{aligned}$$

Now, for each \(i=1,\ldots ,n-1\), we let

$$\begin{aligned} |\delta (S_i)|:=\sum _{r=1}^{k-1}r(k-r)|E_r(S_i)| \quad \text {and}\quad \widetilde{{{\,\mathrm{vol}\,}}}(S_i):=\min \{{{\,\mathrm{vol}\,}}(S_i),{{\,\mathrm{vol}\,}}(\bar{S_i})\}, \end{aligned}$$

so that

$$\begin{aligned} h(S_i)=\frac{|\delta (S_i)|}{\widetilde{{{\,\mathrm{vol}\,}}}(S_i)}\ge h. \end{aligned}$$

Then,

$$\begin{aligned}&\Biggl (\sum _{i=1}^{n-1}\sum _{r=1}^{k-1}r(k-r)|E_r(S_i)|\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\Biggr )^2\\&\quad =\Biggl (\sum _{i=1}^{n-1}|\delta (S_i)|\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\Biggr )^2\\&\quad \ge \Biggl (\sum _{i=1}^{n-1} h\cdot \widetilde{{{\,\mathrm{vol}\,}}}(S_i)\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\Biggr )^2\\&\quad =h^2\cdot \Biggl (\widetilde{{{\,\mathrm{vol}\,}}}(S_1)f_+(v_1)^2+\sum _{i=2}^n \biggl (\widetilde{{{\,\mathrm{vol}\,}}}(S_i)-\widetilde{{{\,\mathrm{vol}\,}}}(S_{i-1})\biggr )f_+(v_i)^2 \Biggr )^2\\&\quad =h^2\cdot \Biggl (\sum _{i=1}^n \text{deg} (v_i) f_+(v_i)^2 \Biggr )^2, \end{aligned}$$

where in the last line we have used the assumption (3). Putting everything together,

$$\begin{aligned} k-\lambda _{n-1}&\ge \frac{\Biggl (\sum _{i=1}^{n-1}\sum _{r=1}^{k-1}r(k-r)|E_r(S_i)|\cdot \biggl (f_+(v_i)^2-f_+(v_{i+1})^2\biggr )\Biggr )^2}{2(k-1)\biggl (\sum _{v\in V} \text{deg} (v)f_+(v)^2\biggr )^2}\\&\ge \frac{h^2}{2(k-1)}\cdot \frac{ \Biggl (\sum _{i=1}^n \text{deg} (v_i)f_+(v_i)^2\Biggr )^2}{\biggl (\sum _{v\in V} \text{deg} (v)f_+(v)^2\biggr )^2}\\&=\frac{h^2}{2(k-1)}. \end{aligned}$$

\(\square \)