Introduction

Oriented hypergraphs are hypergraphs with the additional structure that each vertex in a hyperedge is either an input, an output, or both. They were introduced in [21], together with two normalized Laplace operators whose spectral properties and possible applications have also been investigated in further works [1, 31,32,33]. Here we generalize the Laplace operators on oriented hypergraphs by introducing, for each \(p\in \mathbb {R}_{\geq 1}\), two p-Laplacians. While the vertex p-Laplacian is a known operator for graphs (see for instance [5, 14, 39]), to the best of our knowledge the only edge p-Laplacian for graphs that has been defined is the classical one for p = 2.

Structure of the Paper

In Section 1.1, for completeness of the theory, we discuss the p-Laplacian on Euclidean domains and Riemannian manifolds, and in Section 1.2 we recall the basic notions on oriented hypergraphs. In Section 2 we define the p-Laplacians for p > 1 and we establish their generalized min-max principle, and similarly, in Section 3, we introduce and discuss the 1-Laplacians for oriented hypergraphs. Furthermore, in Section 4 we discuss the smallest and largest eigenvalues of the p-Laplacians for all p, in Section 5 we prove two nodal domain theorems, and in Section 6 we discuss the smallest nonzero eigenvalue. Finally, in Section 7 we discuss several vertex partition problems and their relations to the p-Laplacian eigenvalues, while in Section 8 we discuss hyperedge partition problems.

In [23] we shall build upon the results developed in this paper.

Related Work

It is worth mentioning that, in [18], other vertex p-Laplacians for hypergraphs have been introduced and studied. While these generalized vertex p-Laplacians coincide with the ones that we introduce here in the case of graphs, they do not coincide for general hypergraphs. Also, [18] focuses on classical hypergraphs, while we consider, more generally, oriented hypergraphs.

The p-Laplacian on Euclidean Domains and Riemannian Manifolds

There is a strong analogy between Laplace operators on Euclidean domains and Riemannian manifolds on the one hand and their discrete versions on graphs and hypergraphs on the other, and this analogy provides some of the motivation for our work. Therefore, it may be useful to briefly summarize the theory on Euclidean domains and Riemannian manifolds.

Let \({{\varOmega }} \subset \mathbb {R}^{n}\) be a bounded domain with piecewise Lipschitz boundary \(\partial {{\varOmega }}\), in order to avoid technical issues that are irrelevant for our purposes. More generally, Ω could also be such a domain in a Riemannian manifold.

Let first \(1<p<\infty \). For u in the Sobolev space W1,p(Ω), we may consider the functional

$$ I_{p}(u)={\int}_{{{\varOmega}}} |\nabla u|^{p} dx. $$
(1)

Its Euler–Lagrange operator is the p-Laplacian

$$ {\Delta}_{p} u= -\text{div}(|\nabla u|^{p-2}\nabla u); $$
(2)

for p = 2, we have, of course, the standard Laplace operator. Note that we use the − sign in (2) both to make the operator a positive one and to conform to the conventions used in this paper. The eigenvalue problem arises when we look for critical points of Ip under the constraint

$$ {\int}_{{{\varOmega}}} |u|^{p} dx =1, $$
(3)

or equivalently, if we seek critical points of the Rayleigh quotient

$$ \frac{{\int}_{{{\varOmega}}} |\nabla u|^{p} dx}{{\int}_{{{\varOmega}}} |u|^{p} dx} $$

among functions u≢0. To make the problem well posed, we need to impose a boundary condition, and we consider here the Dirichlet condition

$$ u \equiv 0~\text{ on }~\partial {{\varOmega}}. $$

On a compact Riemannian manifold M with boundary \(\partial M\), we can do the same when we integrate in (1), (3) with respect to the Riemannian volume measure and let ∇ and div denote the Riemannian gradient and divergence operators. When \(\partial M=\emptyset \), we do not need to impose a boundary condition.

Eigenfunctions and eigenvalues then have to satisfy the equation

$$ {\Delta}_{p} u= \lambda |u|^{p-2}u. $$
(4)

For \(1<p <\infty \), the functionals in (1) and (3) are strictly convex, and the spectral theory is similar to that for p = 2, that is, the case of the ordinary Laplacian, which is a well-studied subject. (See for instance [41] for the situation on a Riemannian manifold.) For p = 1, however, the functionals are no longer strictly convex, and things get more complicated. (4) then formally becomes

$$ -\text{div}\left( \frac{\nabla u}{|\nabla u|}\right)=\lambda \frac{u}{|u|}. $$
(5)

In (4) for p > 1, we may put the right hand side = 0 at points where u = 0, but this is no longer possible in (5). This eigenvalue problem has been studied by Kawohl, Schuricht and their students and collaborators, as well as by Chang, and we shall summarize their results. Some references are [7, 24,25,26,27,28,29,30, 34, 37].

One therefore formally replaces (5) by introducing a substitute z of \(\frac {\nabla u}{|\nabla u|}\) and a substitute s of \(\frac {u}{|u|}\), leading to

$$ -\text{div} z=\lambda s, $$
(6)

where \(s\in L^{\infty }({{\varOmega }})\) satisfies

$$ s(x)\in \text{Sgn}(u(x)) $$

with

$$ \text{Sgn}(t):= \left\{\begin{array}{ll} \{1\} &\quad \text{if } t>0,\\ {[-1,1]} &\quad \text{if }t=0,\\ \{-1\} &\quad \text{if }t<0, \end{array}\right. $$

and the vector field \(z\in L^{\infty }({{\varOmega }},\mathbb {R}^{n})\) satisfies

$$ \|z\|_{\infty}=1,\quad \text{div} z \in L^{n}({{\varOmega}}),\quad -{\int}_{{{\varOmega}}} u \text{div} z dx= I(u), $$

where

$$ I(u)={\int}_{{{\varOmega}}} |Du|dx + {\int}_{\partial {{\varOmega}}} |u^{\partial {{\varOmega}}}|d\mathcal{H}^{n-1}. $$
(7)

Again (7) needs some explanation. In fact, while for p > 1, the natural space to work in is W1,p(Ω), for p = 1, it is no longer W1,1(Ω), but rather BV (Ω). This space (for a short introduction, see for instance [20]) consists of all functions \(u\in L^{1}({{\varOmega }})\) for which

$$ |Du|({{\varOmega}})=\sup\left\{{\int}_{{{\varOmega}}} u \text{div} gdx:~g=(g^{1},{\ldots} g^{n})\in C_{0}^{\infty}({{\varOmega}},\mathbb{R}^{n}), |g(x)|\le1\text{ for all } x\in{{\varOmega}}\right\}<\infty. $$

Note that when \(u\in C^{1}({{\varOmega }})\), we have

$$ {\int}_{{{\varOmega}}} u \text{div} gdx= -{\int}_{{{\varOmega}}} \sum\limits_{i} g^{i} \frac{\partial u}{\partial x^{i}} dx, $$

and thus, BV-functions permit such an integration by parts in a weak sense. More precisely, for a BV-function u, its distributional gradient is represented by a finite \(\mathbb {R}^{n}\)-valued signed measure, denoted |Du|dx, and we can write

$$ {\int}_{{{\varOmega}}} u\text{div} gdx= -{\int}_{{{\varOmega}}} g|Du|dx \quad\text{ for }~g\in C_{0}^{\infty}({{\varOmega}},\mathbb{R}^{n}). $$
(8)

Also, \(u\in BV({{\varOmega }})\) has a well-defined trace \(u^{\partial {{\varOmega }}}\in L^{1}(\partial {{\varOmega }})\), and (8) generalizes to

$$ {\int}_{{{\varOmega}}} u\text{div} h dx= -{\int}_{{{\varOmega}}} h|Du|dx +{\int}_{\partial {{\varOmega}}}u^{\partial {{\varOmega}}} (h\nu)d\mathcal{H}^{n-1}\quad\text{ for } h\in C^{1}({{\varOmega}},\mathbb{R}^{n})\cap C(\overline{{{\varOmega}}},\mathbb{R}^{n}) $$

where ν is the outer unit normal of Ω. Importantly, BV-functions can be discontinuous along hypersurfaces. A Borel set \(E\subset {{\varOmega }}\) has finite perimeter if its characteristic function χE satisfies

$$ |D\chi_{E}|({{\varOmega}})\left( = \sup\left\{{\int}_{E}\text{div} g: g\in C_{0}^{\infty}({{\varOmega}},\mathbb{R}^{n}), |g|\le1\right\}\right)<\infty. $$

For instance, if the boundary of E is a compact Lipschitz hypersurface, then the perimeter of E is simply the Hausdorff measure \({\mathscr{H}}^{n-1}(\partial E)\). And if \(E\subset \overline{{{\varOmega }}}\), we have

$$ |D\chi_{E}|:=|D\chi_{E}|(\mathbb{R}^{n})= |D\chi_{E}|({{\varOmega}}) + \mathcal{H}^{n-1}(\partial E \cap \partial {{\varOmega}}). $$

The problem with (6), however, is that in general it has too many solutions, as it becomes rather arbitrary on sets of positive measure where u vanishes, see [30]. The solutions that one is really interested in should be the critical points of a variational principle, with the vanishing of the weak slope of [11] as the appropriate criterion. Inner variations provide another necessary criterion [30]. Viscosity solutions provide another criterion which, however, is still not stringent enough [26].

The Cheeger constant of Ω then is defined as

$$ h_{1}({{\varOmega}}):= \inf_{E\subset \overline{{{\varOmega}}}}\frac{|D\chi_{E}|}{|E|}, $$
(9)

where |E| is the Lebesgue measure of E. A set realizing the infimum in (9) is called a Cheeger set, and every bounded Lipschitz domain Ω possesses at least one Cheeger set. For such a Cheeger set \(E\subset \overline{{{\varOmega }}}\), the boundary part \(\partial E\cap {{\varOmega }}\) is smooth except possibly for a singular set of Hausdorff dimension at most n − 8, and it has constant mean curvature \(\frac {1}{n-1}h_{1}({{\varOmega }})\) at all regular points. When Ω is not convex, its Cheeger set need not be unique.

In fact, h1(Ω) equals the first eigenvalue of the 1-Laplacian. More precisely,

$$ h_{1}({{\varOmega}})= \inf_{u\in BV({{\varOmega}}), u\not\equiv 0}\frac{{\int}_{{{\varOmega}}} |Du|dx + {\int}_{\partial {{\varOmega}}} |u^{\partial {{\varOmega}}}|d\mathcal{H}^{n-1}}{{\int}_{{{\varOmega}}} |u|dx}=:\lambda_{1,1}({{\varOmega}}) $$

is the smallest λ≠ 0 for which there is a nontrivial solution u of (5), and such a u is of the form χE for a Cheeger set, up to a multiplicative factor, of course. Also, if λ1,p(Ω) denotes the smallest nonzero eigenvalue of (4), then

$$ \lim_{p\to 1^{+}}\lambda_{1,p}({{\varOmega}})=\lambda_{1,1}({{\varOmega}}). $$

We also have the lower bound

$$ \lambda_{1,p}({{\varOmega}})\ge \left( \frac{h_{1}({{\varOmega}})}{p}\right)^{p} $$

generalizing the original Cheeger bound for p = 2.
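As a quick worked check (our illustration, not part of the original text), these quantities can be computed explicitly in one dimension, say on Ω = (0,π) with Dirichlet boundary conditions:

```latex
% Every subinterval E = (a,b) \subset (0,\pi) has perimeter |D\chi_E| = 2
% (its two endpoints, with \partial E \cap \partial\Omega counted as above),
% so the ratio in (9) is minimized by the whole interval:
h_1\bigl((0,\pi)\bigr) \;=\; \inf_{(a,b)\subset(0,\pi)} \frac{2}{b-a} \;=\; \frac{2}{\pi}.
% The first Dirichlet eigenvalue of -u'' = \lambda u on (0,\pi) is
% \lambda_{1,2} = 1, with eigenfunction \sin x, consistent with the bound:
1 \;=\; \lambda_{1,2}\bigl((0,\pi)\bigr) \;\ge\; \left(\frac{h_1}{2}\right)^{2} \;=\; \frac{1}{\pi^{2}}.
```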

More generally, for any family of eigenvalues λk,p(Ω) of (4), \(\lim _{p\to 1^{+}}\lambda _{k,p}({{\varOmega }})\) is an eigenvalue of (5). The converse is not true, however; (5) may have more solutions than can be obtained as limits of solutions of (4).

The functional |Du| appears also in image denoising, in so-called TV models (where the acronym TV refers to the fact that |Du|(Ω) is the total variation of the measure |Du|dx) introduced in [36]. There, one wants to denoise a function \(f:{{\varOmega }} \to \mathbb {R}\) by smoothing it, and in the TV models, one wants to minimize a functional of the form

$$ {\int}_{{{\varOmega}}} |Du| dx + \mu {\int}_{{{\varOmega}}} |u-f|dx. $$
(10)

\({\int \limits }_{{{\varOmega }}} |u-f|dx\) is the so-called fidelity term that controls the deviation of the denoised version u from the given data f, and μ > 0 is a parameter that balances the smoothness term against the fidelity term. Formally, a minimizer u has to satisfy an equation of the form

$$ \text{div}\left( \frac{D u}{|D u|}\right)=\mu \frac{u-f}{|u-f|} $$

which is similar to (5). It turns out, however, that when such a model is applied to actual data, the performance is not so good, and it has been found preferable to modify (10) to what is called a nonlocal model in image processing [16]. In [19], such a model was derived from geometric considerations, and this may also provide some insight into the relation with the discrete models considered in this paper. We now recall the construction of that reference.

Let Ω be a domain in \(\mathbb {R}^{n}\) or some more abstract space, and \(\omega :{{\varOmega }} \times {{\varOmega }} \to \mathbb {R}\) a nonnegative, symmetric function. ω(x,y) can be interpreted as some kind of edge weight between the points x,y for any pair (x,y) ∈Ω×Ω. Here x,y can also stand for patches in the image, and in our setting, they could also be vertices in a graph (in which case the integrals below would become sums). We define the average \(\bar {\omega }:{{\varOmega }} \to \mathbb {R}\) of ω by

$$ \bar{\omega}(x)= {\int}_{{{\varOmega}}} \omega(x,y) dy $$

and assume that \(\bar {\omega }\) is positive almost everywhere. On a graph, while ω is an edge function, \(\bar {\omega }\) would be a vertex function, \(\bar {\omega }(x)\) being the degree of the vertex x with edge weights ω(x,y). We first use \(\bar {\omega }(x)\) and ω(x,y) to define the L2-norms for functions \(u:{{\varOmega }}\to \mathbb {R}\) and vector fields p, that is, \(p:{{\varOmega }}\times {{\varOmega }} \to \mathbb {R}\),

$$ \begin{array}{@{}rcl@{}} (u_{1}, u_{2})_{L^{2}} &:=&{\int}_{{{\varOmega}}} u_{1} (x) u_{2}(x)\bar{\omega} (x) dx,\\ (p_{1},p_{2} )_{L^{2}} &:=&{\int}_{{{\varOmega}}\times{{\varOmega}}} p_{1}(x,y) p_{2}(x,y)\omega(x,y) dxdy \end{array} $$

and the corresponding norms \(\|u\|\) and \(\|p\|\).

The discrete derivative of a function (an image) \(u:{{\varOmega }}\to \mathbb {R}\) is defined by

$$ Du(x,y)=u(y)-u(x). $$

Even though Du does not depend on ω, it is in some sense analogous to a gradient, as we shall see below. Its pointwise norm is then given by

$$ |D u|(x) =\left( \frac 1{\bar{\omega} (x)} {\int}_{{{\varOmega}}}(u(y)-u(x))^{2} \omega(x,y) dy\right)^{\frac 12}. $$

The divergence of a vector field \(p:{{\varOmega }} \times {{\varOmega }} \to \mathbb {R}\) is defined by

$$ \text{div} p(x):= \frac 1{\bar{\omega}(x)} {\int}_{{{\varOmega}}} (p(x,y)-p(y,x)) \omega(x,y) dy. $$

Note that, in contrast to Du for a function u, the divergence of a vector field depends on the weight ω. For \(u:{{\varOmega }}\to \mathbb {R}\) and \(p:{{\varOmega }}\times {{\varOmega }}\to \mathbb {R}\), we then have

$$ (Du, p)_{L^{2}}= -(u, \text{div} p)_{L^{2}}, $$

the analog of (8).
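These discrete operators are easy to check numerically. The following Python sketch (our own illustration; the weight matrix is an arbitrary symmetric example) implements Du, div, and the two weighted inner products, and verifies the discrete integration-by-parts identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# symmetric nonnegative weight omega(x, y) on a finite point set
omega = rng.random((n, n))
omega = (omega + omega.T) / 2
np.fill_diagonal(omega, 0.0)
deg = omega.sum(axis=1)                  # the average \bar{omega}(x)

def D(u):
    # discrete derivative: Du(x, y) = u(y) - u(x)
    return u[None, :] - u[:, None]

def div(p):
    # divergence: (1/\bar{omega}(x)) * sum_y (p(x, y) - p(y, x)) * omega(x, y)
    return ((p - p.T) * omega).sum(axis=1) / deg

def inner_V(u1, u2):
    # weighted L^2 inner product for functions on the points
    return float((u1 * u2 * deg).sum())

def inner_E(p1, p2):
    # weighted L^2 inner product for "vector fields" p(x, y)
    return float((p1 * p2 * omega).sum())

u = rng.standard_normal(n)
p = rng.standard_normal((n, n))
# the discrete integration-by-parts formula (Du, p) = -(u, div p)
print(np.isclose(inner_E(D(u), p), -inner_V(u, div(p))))  # True
```

The identity relies on the symmetry of ω: swapping the roles of x and y in the second half of the divergence reproduces the derivative term.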

With the vector field Du and the divergence operator div, we can define a Laplacian for functions

$$ {\Delta} u(x):= -\text{div}(D u)= u(x)- \frac 1{\bar{\omega}(x)} {\int}_{{{\varOmega}}} u(y) \omega(x,y) dy, $$

which in the case of a graph is the Laplacian we have been using. The nonlocal TV (or BV) functional of [19] then is

$$ \begin{array}{@{}rcl@{}} TV_{\omega}(u)&:=&{\int}_{{{\varOmega}}} |D u| \bar \omega(x) dx \\ &=& {\int}_{{{\varOmega}}} \left( {\int}_{{{\varOmega}}} (u(y)-u(x))^{2} \omega (x,y) d y\right)^{\frac{1}{2}} \sqrt {\bar{\omega}(x)} dx. \end{array} $$

This leads to the nonlocal TV model

$$ \begin{array}{@{}rcl@{}} ROF_{\omega}(u)&=& TV_{\omega}(u)+\mu {\int}_{{{\varOmega}}} |u-f| \bar{\omega} (x) dx\\ &=&{\int}_{{{\varOmega}}}\left( {\int}_{{{\varOmega}}}(u(y)-u(x))^{2} \omega (x,y) dy\right)^{\frac{1}{2}} \sqrt{\bar{\omega}(x)} dx \\ &&+\mu {\int}_{{{\varOmega}}}{\int}_{{{\varOmega}}} |u(x)-f(x)|\omega(x,y) dxdy. \end{array} $$

It should be of interest to explore such models on hypergraphs. That would offer the possibility to account not only for correlations between pairs, but also between selected larger sets of vertices, for instance three collinear ones.
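A minimal discrete sketch of \(TV_{\omega}\) and the nonlocal ROF energy, with Ω replaced by a finite point set and integrals by sums (the sizes and weights are our own toy choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
omega = rng.random((n, n))
omega = (omega + omega.T) / 2            # symmetric weight omega(x, y)
np.fill_diagonal(omega, 0.0)
deg = omega.sum(axis=1)                  # \bar{omega}(x)

def tv(u):
    # TV_omega(u) = sum_x sqrt(sum_y (u(y) - u(x))^2 omega(x, y)) * sqrt(deg(x))
    sq = (((u[None, :] - u[:, None]) ** 2) * omega).sum(axis=1)
    return float((np.sqrt(sq) * np.sqrt(deg)).sum())

def rof(u, f, mu):
    # TV term plus the weighted fidelity term mu * sum_x |u(x) - f(x)| * deg(x)
    return tv(u) + mu * float((np.abs(u - f) * deg).sum())

f = rng.standard_normal(n)               # a noisy "image" on the point set
c = np.full(n, f.mean())                 # a constant candidate
print(tv(c))                             # 0.0: constants have zero total variation
print(rof(c, f, 1.0) >= 0.0)             # True
```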

Basic Notions on Hypergraphs

Definition 1.1

([38]) An oriented hypergraph is a pair Γ = (V,H) such that V is a finite set of vertices and H is a set such that every element h in H is a pair \((h_{in},h_{out})\) (input and output) of disjoint elements of \(\mathcal {P}(V)\setminus \{\emptyset \}\). The elements of H are called the oriented hyperedges. Changing the orientation of a hyperedge h means exchanging its input and output, leading to the pair \((h_{out},h_{in})\).

With a little abuse of notation, we shall see h as \(h_{in}\cup h_{out}\).
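In code, Definition 1.1 can be mirrored by a small data structure; the following Python sketch (ours, with hypothetical names) enforces the disjointness requirement and implements orientation reversal and the identification of h with its vertex set:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hyperedge:
    # an oriented hyperedge h = (h_in, h_out): disjoint, nonempty vertex sets
    h_in: frozenset
    h_out: frozenset

    def __post_init__(self):
        assert self.h_in and self.h_out and not (self.h_in & self.h_out)

    def reverse(self):
        # changing the orientation exchanges input and output
        return Hyperedge(self.h_out, self.h_in)

    @property
    def vertices(self):
        # the abuse of notation: h seen as the union of h_in and h_out
        return self.h_in | self.h_out

h = Hyperedge(frozenset({1, 2}), frozenset({3}))
print(h.reverse() == Hyperedge(frozenset({3}), frozenset({1, 2})))  # True
print(sorted(h.vertices))                                           # [1, 2, 3]
```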

Definition 1.2

([33]) Given hH, we say that two vertices i and j are co-oriented in h if they belong to the same orientation set of h; we say that they are anti-oriented in h if they belong to different orientation sets of h.

Definition 1.3

Given iV, we say that two hyperedges h and \(h^{\prime }\) contain i with the same orientation if \(i\in (h_{in}\cap h^{\prime }_{in})\cup (h_{out}\cap h^{\prime }_{out})\); we say that they contain i with opposite orientation if \(i\in (h_{in}\cap h^{\prime }_{out})\cup (h_{out}\cap h^{\prime }_{in})\).

Definition 1.4

([31]) The degree of a vertex i is

$$ \deg(i):=\#\text{ hyperedges containing \textit{i} only as an input or only as an output} $$

and the cardinality of a hyperedge h is

$$ \begin{array}{@{}rcl@{}} \# h&:=&\#\{(h_{in}\setminus h_{out})\cup (h_{out}\setminus h_{in})\}\\ &=&\#\text{ vertices in \textit{h} that are either only an input or only an output}. \end{array} $$
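Both quantities are straightforward to compute; a small Python sketch (on a made-up toy hypergraph, with each hyperedge given as a pair of input and output sets):

```python
# toy oriented hypergraph on vertices 1..4 with two hyperedges
H = [({1, 2}, {3}), ({3}, {1, 4})]
V = {1, 2, 3, 4}

def deg(i):
    # hyperedges containing i only as an input or only as an output
    return sum(1 for h_in, h_out in H if (i in h_in) != (i in h_out))

def card(h):
    # vertices of h that are either only an input or only an output
    h_in, h_out = h
    return len((h_in - h_out) | (h_out - h_in))

print([deg(i) for i in sorted(V)])  # [2, 1, 2, 1]
print([card(h) for h in H])         # [3, 3]
```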

From now on, we fix such an oriented hypergraph Γ = (V,H) on n vertices 1,…,n and m hyperedges h1,…,hm. We assume that there are no vertices of degree zero. We denote by C(V ) the space of functions \(f:V\rightarrow \mathbb {R}\) and we denote by C(H) the space of functions \(\gamma :H\rightarrow \mathbb {R}\).

p-Laplacians for p > 1

Definition 2.1

Given \(p\in \mathbb {R}_{> 1}\), the (normalized) vertex p-Laplacian is Δp : C(V ) → C(V ), where

$$ {\Delta}_{p} f(i):=\frac{1}{\deg(i)} \sum\limits_{h\ni i}\left|\sum\limits_{i^{\prime}\text{ input of }h}f(i^{\prime})-\sum\limits_{i^{\prime\prime}\text{ output of }h}f(i^{\prime\prime})\right|^{p-2}\left( \sum\limits_{j\in h,o_{h}(i,j)=-1}\!\!\!f(j)-\sum\limits_{j^{\prime}\in h,o_{h}(i,j^{\prime})=1}\!\!\!f(j^{\prime})\right), $$

where

$$ o_{h}(i,j)=\left\{\begin{array}{ll} -1&\quad\text{ if }i,j\in h, i\text{ and }j \text{ are co-oriented in }h,\\ 1&\quad\text{ if }i,j\in h, i\text{ and }j \text{ are anti-oriented in }h,\\ 0,&\quad\text{ otherwise.} \end{array}\right. $$

We define its eigenvalue problem as

$$ {\Delta}_{p} f=\lambda |f|^{p-2}f. $$
(11)

We say that a nonzero function f and real number λ satisfying (11) are an eigenfunction and the corresponding eigenvalue for Δp.
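For concreteness, here is a direct Python implementation of Definition 2.1 (our own sketch; since \(h_{in}\) and \(h_{out}\) are disjoint, the inner sum for a vertex i of h equals \(\pm(\sum_{h_{in}}f-\sum_{h_{out}}f)\), with sign + if i is an input). On a single oriented edge it reduces to the normalized graph Laplacian:

```python
def vertex_p_laplacian(H, f, p):
    # H: list of (h_in, h_out) pairs; f: dict vertex -> value
    # Delta_p f(i) = (1/deg i) * sum_{h containing i} |S_h|^(p-2) * (+-S_h),
    # where S_h = sum_{h_in} f - sum_{h_out} f, with + if i is an input of h
    def deg(i):
        return sum(1 for h_in, h_out in H if (i in h_in) != (i in h_out))
    out = {}
    for i in f:
        total = 0.0
        for h_in, h_out in H:
            if i not in h_in and i not in h_out:
                continue
            S = sum(f[j] for j in h_in) - sum(f[j] for j in h_out)
            sign = 1.0 if i in h_in else -1.0
            if S != 0:                    # guard: |0|^(p-2) is singular for p < 2
                total += abs(S) ** (p - 2) * sign * S
        out[i] = total / deg(i)
    return out

# single oriented edge ({1}, {2}); for p = 2, f = (1, -1) is an eigenfunction
# of the normalized graph Laplacian with eigenvalue 2
f = {1: 1.0, 2: -1.0}
print(vertex_p_laplacian([({1}, {2})], f, 2))  # {1: 2.0, 2: -2.0}
```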

Remark 2.2

Definition 2.1 generalizes both the graph p-Laplacian and the normalized Laplacian defined in [21] for hypergraphs, which corresponds to the case p = 2.

Remark 2.3

The p-Laplace operators for classical hypergraphs that were introduced in [18] coincide with the vertex p-Laplacians that we introduced here in the case of simple graphs, but not in the more general case of hypergraphs. In fact, the Laplacians in [18] are related to the Lovász extension, while the operators that we consider here are defined via the incidence matrix. Also, the corresponding functionals for the p-Laplacians in [18] are of the form

$$ f\mapsto \sum\limits_{h\in H}\max_{i,j\in h}|f(i)-f(j)|^{p}, $$

and these are non-smooth in general, even for p > 1. In our case, the corresponding functionals are of the form

$$ f\mapsto \sum\limits_{h\in H}\left|\sum\limits_{i\in h_{in}} f(i)-\sum\limits_{j\in h_{out}} f(j)\right|^{p}, $$

and these are smooth for p > 1.

Definition 2.4

Given \(p\in \mathbb {R}_{> 1}\), the (normalized) hyperedge p-Laplacian is \({{\Delta }^{H}_{p}}:C(H)\to C(H)\), where

$$ {{\Delta}_{p}^{H}}\gamma(h):=\sum\limits_{i\in h}\frac{1}{\deg (i)}\left|\sum\limits_{h^{\prime}\ni i\text{ as input }}\!\!\!\gamma(h^{\prime})-\sum\limits_{h^{\prime\prime}\ni i\text{ as output }}\!\!\!\gamma(h^{\prime\prime})\right|^{p-2}\!\!\!\left( \sum\limits_{h^{\prime}\ni i, o_{i}(h,h^{\prime})=-1}\!\!\!\gamma(h^{\prime})-\sum\limits_{h^{\prime\prime}\ni i, o_{i}(h,h^{\prime\prime})=1}\!\!\!\gamma(h^{\prime\prime})\right), $$

where

$$ o_{i}(h,h^{\prime})=\left\{\begin{array}{ll} -1&\quad\text{ if }h,h^{\prime}\ni i \text{ with the same orientation},\\ 1&\quad\text{ if }h,h^{\prime}\ni i \text{ with opposite orientation},\\ 0,&\quad\text{ otherwise.} \end{array}\right. $$

We define its eigenvalue problem as

$$ {{\Delta}^{H}_{p}} \gamma=\lambda |\gamma|^{p-2}\gamma. $$
(12)

We say that a nonzero function γ and a real number λ satisfying (12) are an eigenfunction and the corresponding eigenvalue for \({{\Delta }^{H}_{p}}\).
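Analogously, Definition 2.4 admits a direct implementation (again our own sketch; for a hyperedge h containing the vertex i, the inner sum equals \(\pm T_{i}\), where \(T_{i}\) is the oriented sum of γ over the hyperedges at i, with sign + if i is an input of h):

```python
def hyperedge_p_laplacian(H, gamma, p):
    # H: list of (h_in, h_out) pairs; gamma: list of values, one per hyperedge
    # Delta_p^H gamma(h) = sum_{i in h} (1/deg i) * |T_i|^(p-2) * (+-T_i), where
    # T_i = sum_{h' with i as input} gamma - sum_{h'' with i as output} gamma
    def deg(i):
        return sum(1 for h_in, h_out in H if (i in h_in) != (i in h_out))
    def T(i):
        return sum(g for (h_in, _), g in zip(H, gamma) if i in h_in) \
             - sum(g for (_, h_out), g in zip(H, gamma) if i in h_out)
    out = []
    for h_in, h_out in H:
        val = 0.0
        for i in h_in | h_out:
            t = T(i)
            sign = 1.0 if i in h_in else -1.0
            if t != 0:                    # guard the singular power for p < 2
                val += abs(t) ** (p - 2) * sign * t / deg(i)
        out.append(val)
    return out

# a path of two oriented edges sharing vertex 2; gamma = (1, 1) satisfies
# Delta_2^H gamma = 1 * gamma, i.e. it is an eigenfunction with eigenvalue 1
H = [({1}, {2}), ({2}, {3})]
print(hyperedge_p_laplacian(H, [1.0, 1.0], 2))  # [1.0, 1.0]
```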

Remark 2.5

For p = 2, Definition 2.4 coincides with the one in [21]. Also, as we shall see, while it is known that the nonzero eigenvalues of Δp and \({{\Delta }_{p}^{H}}\) coincide for p = 2, this is no longer true for a general p.

Generalized Min-max Principle

For p = 2, the Courant–Fischer–Weyl min-max principle can be applied in order to obtain a characterization of the eigenvalues of Δ2 and \({{\Delta }_{2}^{H}}\) in terms of the Rayleigh Quotients of the functions fC(V ) and γC(H), respectively, as shown in [21]. In this section we prove that, for p > 1, a generalized version of the min-max principle can be applied in order to learn more about the eigenvalues of Δp and \({{\Delta }_{p}^{H}}\). Similar results are already known for graphs, as shown for instance in [40]. Before stating the main results of this section, we define the generalized Rayleigh Quotients for functions on the vertex set and for functions on the hyperedge set.

Definition 2.6

Let \(p\in \mathbb {R}_{\geq 1}\). Given fC(V ), its generalized Rayleigh Quotient is

$$ \text{RQ}_{p}(f):=\frac{{\sum\limits}_{h\in H}\left|{\sum\limits}_{i\text{ input of }h}f(i)-{\sum\limits}_{j\text{ output of }h}f(j)\right|^{p}}{{\sum\limits}_{i\in V}\deg(i)|f(i)|^{p}}. $$

Analogously, the generalized Rayleigh Quotient of γC(H) is

$$ \text{RQ}_{p}(\gamma):=\frac{{\sum\limits}_{i\in V}\frac{1}{\deg (i)}\cdot \left|{\sum\limits}_{h^{\prime}\ni i\text{ as input}}\gamma(h^{\prime})-{\sum\limits}_{h^{\prime\prime}\ni i\text{ as output}}\gamma(h^{\prime\prime})\right|^{p}}{{\sum\limits}_{h\in H}|\gamma(h)|^{p}}. $$
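These quotients are immediate to evaluate; here is a small Python sketch for the vertex version (our own toy check, on a single oriented edge, where RQ2 ranges between the eigenvalues 0 and 2):

```python
def rq_p(H, f, p):
    # generalized Rayleigh quotient of a vertex function f
    def deg(i):
        return sum(1 for h_in, h_out in H if (i in h_in) != (i in h_out))
    num = sum(abs(sum(f[i] for i in h_in) - sum(f[j] for j in h_out)) ** p
              for h_in, h_out in H)
    den = sum(deg(i) * abs(f[i]) ** p for i in f)
    return num / den

H = [({1}, {2})]                       # a single oriented edge
print(rq_p(H, {1: 1.0, 2: -1.0}, 2))  # 2.0, attained by the eigenfunction (1, -1)
print(rq_p(H, {1: 1.0, 2: 1.0}, 2))   # 0.0, attained by constants
```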

Remark 2.7

It is clear from the definition of RQp(f) and RQp(γ) that

$$ \text{RQ}_{\hat{p}}(f)=0~\text{ for some }\hat{p}\quad\Longleftrightarrow\quad\text{RQ}_{p}(f)=0~\text{ for all }p $$

and

$$ \text{RQ}_{\hat{p}}(\gamma)=0~\text{ for some }\hat{p}\quad\Longleftrightarrow\quad\text{RQ}_{p}(\gamma)=0~\text{ for all }p. $$

Theorem 2.8

Let \(p\in \mathbb {R}_{>1}\). fC(V ) ∖{0} is an eigenfunction for Δp with corresponding eigenvalue λ if and only if

$$ \nabla \text{RQ}_{p}(f)=0\quad \text{ and }\quad\lambda=\text{RQ}_{p}(f). $$

Similarly, γC(H) ∖{0} is an eigenfunction for \({{\Delta }^{H}_{p}}\) with corresponding eigenvalue μ if and only if

$$ \nabla \text{RQ}_{p}(\gamma)=0\quad \text{ and }\quad\mu=\text{RQ}_{p}(\gamma). $$

Proof

For \(p\in \mathbb {R}_{>1}\), RQp is differentiable on \(\mathbb {R}^{n}\setminus 0\). Also,

$$ \begin{array}{@{}rcl@{}} \partial_{i}\text{RQ}_{p}(f)& = & \partial_{i}\left( \frac{{\sum\limits}_{h\in H}\left|{\sum\limits}_{i\text{ input of }h}f(i)-{\sum\limits}_{j\text{ output of }h}f(j)\right|^{p}}{{\sum\limits}_{i\in V}\deg(i)|f(i)|^{p}}\right)\\ & = &\frac{\partial_{i}\left( {\sum\limits}_{h}\left|{\sum\limits}_{i\in h_{in}}f(i)-{\sum\limits}_{j\in h_{out}}f(j)\right|^{p}\right)-\text{RQ}_{p}(f)\cdot \partial_{i}\left( {\sum\limits}_{i}\deg(i)|f(i)|^{p}\right)}{{\sum\limits}_{i}\deg(i)|f(i)|^{p}}\\ & = &\frac{p\cdot\deg(i)\cdot{\Delta}_{p}f(i)-\text{RQ}_{p}(f)\cdot p\cdot\deg(i)\cdot|f(i)|^{p-2}f(i)}{{\sum\limits}_{i}\deg(i)|f(i)|^{p}}\\ & = & p\cdot \deg(i)\cdot \frac{ {\Delta}_{p} f(i)-\text{RQ}_{p}(f) |f(i)|^{p-2}f(i)}{{\sum\limits}_{i=1}^{n} \deg(i)|f(i)|^{p}}, \end{array} $$

where we have used the fact that

$$ \partial_{t}|t|^{p}=p|t|^{p-1}\text{sign}(t)=p|t|^{p-2}t. $$

Hence,

$$ \begin{array}{@{}rcl@{}} \nabla \text{RQ}_{p}(f)=0~&\Longleftrightarrow&~ \partial_{i}\text{RQ}_{p}(f)=0\quad\forall i\\ &\Longleftrightarrow&~{\Delta}_{p} f=\text{RQ}_{p}(f) |f|^{p-2}f \\ &\Longleftrightarrow&~f \text{ is an eigenfunction for }{\Delta}_{p} \text{ with eigenvalue }\text{RQ}_{p}(f). \end{array} $$

Furthermore, if f is an eigenfunction corresponding to any eigenvalue λ, then \({\Delta }_{p} f=\lambda |f|^{p-2}f\), and therefore

$$ \langle{\Delta}_{p} f,f\rangle=\langle\lambda |f|^{p-2}f,f\rangle $$

which can be simplified as

$$ \text{RQ}_{p}(f)=\lambda. $$

This proves the claim for Δp. The case of \({{\Delta }^{H}_{p}}\) is similar. We have that

$$ \begin{array}{@{}rcl@{}} \partial_{h}\text{RQ}_{p}(\gamma)&=& \partial_{h} \left( \frac{{\sum}_{i\in V}\frac{1}{\deg (i)}\cdot \left|{\sum\limits}_{h^{\prime}\ni i\text{ as input}}\gamma(h^{\prime})-{\sum\limits}_{h^{\prime\prime}\ni i\text{ as output}}\gamma(h^{\prime\prime})\right|^{p}}{{\sum\limits}_{\hat{h}\in H}|\gamma(\hat{h})|^{p}} \right)\\ &=& \frac{\partial_{h}\left( {\sum\limits}_{i\in V}\frac{1}{\deg (i)}\cdot \left|\sum\limits_{h^{\prime}\ni i\text{ as input}}\gamma(h^{\prime})-\sum\limits_{h^{\prime\prime}\ni i\text{ as output}}\gamma(h^{\prime\prime})\right|^{p}\right)-\text{RQ}_{p}(\gamma)\cdot \partial_{h}\left( \sum\limits_{\hat{h}\in H}|\gamma(\hat{h})|^{p}\right)}{{\sum}_{\hat{h}\in H}|\gamma(\hat{h})|^{p}} \\ &=&\frac{p\cdot {{\Delta}_{p}^{H}}\gamma(h)-\text{RQ}_{p}(\gamma)\cdot p\cdot \left( |\gamma(h)|^{p-2}\gamma(h)\right)}{{\sum\limits}_{\hat{h}\in H}|\gamma(\hat{h})|^{p}}. \end{array} $$

Therefore,

$$ \begin{array}{@{}rcl@{}} \nabla \text{RQ}_{p}(\gamma)=0~&\Longleftrightarrow&~ \partial_{h}\text{RQ}_{p}(\gamma)=0\quad\forall h \\ &\Longleftrightarrow&~ {{\Delta}_{p}^{H}}\gamma=\text{RQ}_{p}(\gamma)|\gamma|^{p-2}\gamma\\ &\Longleftrightarrow&~ \gamma \text{ is an eigenfunction for }{{\Delta}^{H}_{p}} \text{ with eigenvalue }\text{RQ}_{p}(\gamma). \end{array} $$

This proves the first implication for \({{\Delta }^{H}_{p}}\). The converse implication is analogous to the case of Δp. □

Corollary 2.9

For all p > 1,

$$ \min_{f\in C(V)}\text{RQ}_{p}(f) \qquad \left( \text{resp. } \max_{f\in C(V)}\text{RQ}_{p}(f)\right) $$
(13)

is the smallest (resp. largest) eigenvalue of Δp, and f realizing (13) is a corresponding eigenfunction.

Analogously,

$$ \min_{\gamma\in C(H)}\text{RQ}_{p}(\gamma) \qquad \left( \text{resp. } \max_{\gamma\in C(H)}\text{RQ}_{p}(\gamma)\right) $$
(14)

is the smallest (resp. largest) eigenvalue of \({{\Delta }^{H}_{p}}\), and γ realizing (14) is a corresponding eigenfunction.

Proof

By Fermat’s theorem, if f≠ 0 minimizes or maximizes RQp over \(\mathbb {R}^{n}\setminus 0\), then ∇RQp(f) = 0. The claim for Δp then follows by Theorem 2.8, and the case of \({{\Delta }^{H}_{p}}\) is analogous. □

We now give a preliminary definition, before stating the generalized min-max principle.

Definition 2.10

For a centrally symmetric set S in \(\mathbb {R}^{n}\), its Krasnoselskii \(\mathbb {Z}_{2}\) genus is defined as

$$ \text{gen}(S) := \left\{\begin{array}{ll} \min\left\{k\in\mathbb{Z}^{+}: \exists \text{odd continuous } h: S\setminus 0\to \mathbb{S}^{k-1}\right\} &\quad \text{if }~S\setminus 0\ne\emptyset,\\ 0 &\quad \text{if }S\setminus 0=\emptyset. \end{array}\right. $$

For each k ≥ 1, we let \(\text {Gen}_{k}:=\{ S\subset \mathbb {R}^{n}: S\text { centrally symmetric with } \text {gen}(S)\ge k\}\).

Remark 2.11

From the above definition we get an inclusion chain

$$ \text{Gen}_{1}\supset \text{Gen}_{2}\supset\cdots\supset\text{Gen}_{n}\supset \text{Gen}_{n+1}=\cdots=\emptyset. $$

Therefore, the Krasnoselskii \(\mathbb {Z}_{2}\) genus gives a graded index of the family of all centrally symmetric sets with center at 0 in \(\mathbb {R}^{n}\), which generalizes the (linear) dimension of subspaces.

Theorem 2.12

(Generalized min-max principle) Let \(p\in \mathbb {R}_{>1}\). For k = 1,…,n, the constants

$$ \lambda_{k}({\Delta}_{p}) := \inf_{S\in\text{Gen}_{k}} \sup_{f\in S\setminus 0} \text{RQ}_{p}(f) $$
(15)

are eigenvalues of Δp. They satisfy

$$ \lambda_{1}\le \cdots\leq \lambda_{n} $$

and, if λ = λk+ 1 = ⋯ = λk+l for 0 ≤ k < k + ln, then

$$ \text{gen}(\{\text{eigenfunctions corresponding to }\lambda\})\ge l. $$

The same holds for the constants

$$ \mu_{k}\left( {{\Delta}^{H}_{p}}\right) := \inf_{S\in\text{Gen}_{k}} \sup_{\gamma\in S\setminus 0} \text{RQ}_{p}(\gamma), \qquad k=1,\ldots,m, $$

that are eigenvalues of \({{\Delta }^{H}_{p}}\).

Proof

By Theorem 2.8, in order to prove the claim for Δp it suffices to show that λkp) defined in (15) is a critical value of RQp. Let

$$ \|f\|_{p}:=\left( \sum\limits_{i\in V}\deg(i)|f(i)|^{p}\right)^{\frac1p} $$

be the p-norm with weights given by the degrees, and let

$$ E_{p}(f):=\sum\limits_{h\in H} \left|\sum\limits_{j\in h_{in}} f(j)-\sum\limits_{j^{\prime}\in h_{out}} f(j^{\prime})\right|^{p}. $$

Then, \(\text {RQ}_{p}(f)=E_{p}(\frac {f}{\|f\|_{p}})\). Now, consider the lp-sphere \(S_{p}=\{f\in \mathbb {R}^{n}:\|f\|_{p}=1\}\). We have that

$$ \sup_{f\in S\setminus 0}\text{RQ}_{p}(f)=\sup_{f\in \mathbb{R}_{+}S\setminus 0}\text{RQ}_{p}(f)=\sup_{f\in \mathbb{R}_{+}S\cap S_{p}}E_{p}(f), $$

where \(\mathbb {R}_{+}S := \{cg : g\in S, c > 0\}\). Therefore, it can be verified that

$$ \lambda_{k}({\Delta}_{p})=\inf_{S\subset S_{p},\,S\in \text{Gen}_{k}}\,\sup_{f\in S}\,E_{p}(f). $$

From the Liusternik–Schnirelmann theorem applied to the smooth function Ep restricted to the smooth lp-sphere Sp, it follows that such a min-max quantity must be a critical value of Ep on Sp, and hence an eigenvalue. This proves the claim for Δp. The case of \({{\Delta }_{p}^{H}}\) is similar, if we consider

$$ \begin{array}{@{}rcl@{}} \|\gamma\|_{p}&:=&\left( \sum\limits_{h\in H}|\gamma(h)|^{p}\right)^{\frac1p},\\ E_{p}(\gamma)&:=&\sum\limits_{i\in V}\frac{1}{\deg (i)}\left|\sum\limits_{h^{\prime}\ni i\text{ as input }}\gamma(h^{\prime})-\sum\limits_{h^{\prime\prime}\ni i\text{ as output }}\gamma(h^{\prime\prime})\right|^{p} \end{array} $$

and \(S_{p}:=\{\gamma \in \mathbb {R}^{m}:\|\gamma \|_{p}=1\}\). □

Remark 2.13

For the case of p = 2, a linear subspace X in \(\mathbb {R}^{n}\) with \(\dim X=k\) satisfies gen(X) = k, and by considering the sub-family

$$ \widetilde{\text{Gen}}_{k}:=\{\text{linear subspaces of dimension at least }k\}\subset \text{Gen}_{k} $$

we have

$$ \lambda_{k}({\Delta}_{2}) = \inf_{S\in\text{Gen}_{k}} \sup_{f\in S\setminus 0} \text{RQ}_{2}(f)=\inf_{S\in\widetilde{\text{Gen}}_{k}} \sup_{f\in S\setminus 0} \text{RQ}_{2}(f). $$

This coincides with the Courant–Fischer–Weyl min-max principle. On the other hand, for p > 1, we only know that

$$ \lambda_{k}({\Delta}_{p}) = \inf_{S\in\text{Gen}_{k}}\sup_{f\in S\setminus 0} \text{RQ}_{p}(f)\le \inf_{S\in\widetilde{\text{Gen}}_{k}}\sup_{f\in S\setminus 0} \text{RQ}_{p}(f). $$

In particular, while for p = 2 we know that the n eigenvalues of Δ2 (resp. the m eigenvalues of \({{\Delta }_{2}^{H}}\)) appearing in Theorem 2.12 are all the eigenvalues of Δ2 (resp. \({{\Delta }_{2}^{H}}\)), we don't know whether Δp and \({{\Delta }_{p}^{H}}\) also have further eigenvalues for p≠ 2. This is an open question also in the graph case. In other words, we don't know whether all eigenvalues of Δp and \({{\Delta }_{p}^{H}}\) can be written in the min-max Rayleigh Quotient form.

Conjecture 1

For \(1<p<\infty \), all eigenvalues of Δp are min-max eigenvalues.

We formulate this conjecture, because for the p-Laplacian on domains and manifolds as well as on graphs, it is an open problem whether all the eigenvalues of the p-Laplacian are of the min-max form (see [3, 6, 13] and [40]). Thus, as far as we know, Conjecture 1 is open in both the continuous and the discrete setting.

Throughout the paper, given p > 1 we shall denote by

$$ \lambda_{1}\leq\cdots\leq \lambda_{n} \qquad \text{and}\qquad \mu_{1}\leq\cdots\leq \mu_{m} $$

the eigenvalues of Δp and \({{\Delta }^{H}_{p}}\), respectively, which are described in Theorem 2.12. We shall call them the min-max eigenvalues. Note that, although we cannot say a priori whether these are all the eigenvalues of the p-Laplacians, in view of Corollary 2.9 we can always say that

$$ \begin{array}{@{}rcl@{}} \lambda_{1}&=&\min_{f\in C(V)}\text{RQ}_{p}(f),\quad \lambda_{n}=\max_{f\in C(V)}\text{RQ}_{p}(f),\\ \mu_{1}&=&\min_{\gamma\in C(H)}\text{RQ}_{p}(\gamma), \quad \mu_{m}=\max_{\gamma\in C(H)}\text{RQ}_{p}(\gamma). \end{array} $$
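As a numerical illustration of these extremal characterizations (our own sketch, not from the text): since λ1 and λn are the minimum and maximum of the Rayleigh quotient, every sampled quotient must lie between them. For the single hyperedge ({1,2},{3}) and p = 2, the extreme eigenvalues are 0 and 3:

```python
import random

def rq_p(H, f, p):
    # generalized vertex Rayleigh quotient, as in Definition 2.6
    def deg(i):
        return sum(1 for h_in, h_out in H if (i in h_in) != (i in h_out))
    num = sum(abs(sum(f[i] for i in h_in) - sum(f[j] for j in h_out)) ** p
              for h_in, h_out in H)
    den = sum(deg(i) * abs(f[i]) ** p for i in f)
    return num / den

random.seed(0)
H = [({1, 2}, {3})]                     # one hyperedge; lambda_1 = 0, lambda_n = 3
vals = [rq_p(H, {i: random.gauss(0, 1) for i in (1, 2, 3)}, 2)
        for _ in range(2000)]
# every sampled quotient lies between the extreme eigenvalues
print(0.0 <= min(vals) and max(vals) <= 3.0)  # True
```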

1-Laplacians

In this section we generalize the well-known 1-Laplacian for graphs [7, 8, 17] to the case of hypergraphs.

Definition 3.1

The 1-Laplacian is the set-valued operator such that, given fC(V ),

$$ {\Delta}_{1} f:=\left\{\sum\limits_{i\in V}\frac{1}{\deg(i)} \sum\limits_{h\ni i} z_{ih}\vec{e}_{i}~\left|~z_{ih}\in \text{Sgn}\left( \sum\limits_{j\in h,o_{h}(i,j)=-1}f(j)-\sum\limits_{j^{\prime}\in h,o_{h}(i,j^{\prime})=1}f(j^{\prime})\right),z_{ih}=o_{h}(i,j)z_{jh}\right.\right\}, $$

where \(\vec {e}_{1},\ldots ,\vec {e}_{n}\) is the standard orthonormal basis of \(\mathbb {R}^{n}\) and

$$ \text{Sgn}(t):=\left\{\begin{array}{ll} \{1\} &\quad \text{if } t>0,\\ {[}-1,1{]} &\quad \text{if }t=0,\\ \{-1\} &\quad \text{if }t<0. \end{array}\right. $$

Analogously, the hyperedge 1-Laplacian for functions γC(H) is

$$ {{\Delta}^{H}_{1}} \gamma:=\left\{\sum\limits_{h\in H} \sum\limits_{i\in h} \frac{1}{\deg (i)} z_{ih}{\mathbf{e}}_{h}~\left|~z_{ih}\in \text{Sgn}\!\!\left( \sum\limits_{h^{\prime}\ni i, o_{i}(h,h^{\prime})=-1}\!\!\!\gamma(h^{\prime})-\sum\limits_{h^{\prime\prime}\ni i, o_{i}(h,h^{\prime\prime})=1}\!\!\!\gamma(h^{\prime\prime})\!\right),z_{ih}=o_{i}(h,h^{\prime})z_{ih^{\prime}}\right.\!\!\right\} $$

where \(\mathbf {e}_{h_{1}},\ldots ,\mathbf {e}_{h_{m}}\) is the orthonormal basis of \(\mathbb {R}^{m}\).

For any f ∈ C(V ), Δ1f is a compact convex set in \(C(V)\cong \mathbb {R}^{n}\), and so is

$$ \text{Sgn}(f):=\{g\in C(V): g(i)\in \text{Sgn}(f(i))~\forall i\}. $$
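As an elementary illustration (ours, not part of the formal development), the set Sgn(t) is always a closed interval, so it can be represented by its endpoints; the following Python sketch encodes this representation and tests membership:

```python
def sgn_interval(t):
    """Return Sgn(t) as a closed interval (lo, hi)."""
    if t > 0:
        return (1.0, 1.0)      # {1}
    if t < 0:
        return (-1.0, -1.0)    # {-1}
    return (-1.0, 1.0)         # [-1, 1]

def in_sgn(z, t):
    """Test whether z lies in Sgn(t)."""
    lo, hi = sgn_interval(t)
    return lo <= z <= hi

print(sgn_interval(3.0))   # (1.0, 1.0)
print(in_sgn(0.25, 0.0))   # True: any z in [-1, 1] lies in Sgn(0)
```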

Remark 3.2

The 1-Laplacian is the limit of the p-Laplacian with respect to the set-valued upper limit, i.e.

$$ {\Delta}_{1}f= \limsup_{p\to1^{+},\delta\to 0^{+}}{\Delta}_{p}(\mathbb{B}_{\delta}(f)) = \lim_{\delta\to 0^{+}}\lim\limits_{p\to1^{+}}\text{conv}({\Delta}_{p}(\mathbb{B}_{\delta}(f))), $$

where \(\mathbb {B}_{\delta }(f)\) is the ball with radius δ and center f. In other words, Δ1f is the set of limit points of \({\Delta }_{p}f^{\prime }\) when p → 1 and \(f^{\prime }\to f\). On the one hand, if f is such that \({\sum }_{i\in h_{in}}f(i)\ne {\sum }_{i\in h_{out}}f(i)\) for all hH, then \({\Delta }_{1}f=\lim _{p\to 1^{+}}{\Delta }_{p}f\) in the classical sense. On the other hand, for a general fC(V ), the limit may not exist. To some extent, the set-valued upper limit ensures the upper semi-continuity of the family of p-Laplacians, that is, the set-valued mapping \([1,\infty )\times C(V)\ni (p,f)\mapsto {\Delta }_{p}f\in C(V)\) is upper semi-continuous.

Definition 3.3

The eigenvalue problem of Δ1 is to find an eigenpair (λ,f) such that

$$ {\Delta}_{1} f\bigcap \lambda \text{Sgn}(f)\ne\emptyset $$

or equivalently, in terms of Minkowski summation,

$$ 0\in {\Delta}_{1} f- \lambda \text{Sgn}(f). $$

In coordinate form, this means that there exist

$$ z_{ih}\in \text{Sgn}\left( \sum\limits_{j\in h, o_{h}(i,j)=-1}f(j)-\sum\limits_{j^{\prime}\in h, o_{h}(i,j^{\prime})=1}f(j^{\prime})\right) $$

with \(z_{ih} = o_{h}(i,j)z_{jh}\) for \(i,j\in h\), and \(z_{i}\in \text{Sgn}(f(i))\) such that

$$ \sum\limits_{h\ni i}z_{ih}=\lambda \deg(i) z_{i},\quad \forall i\in V. $$

Remark 3.4

A shorter coordinate form of the eigenvalue problem for the 1-Laplacian is

$$ \begin{array}{@{}rcl@{}} &&\exists z_{i}\in \text{Sgn}(f(i))\quad\text{ and }\quad z_{h}\in \text{Sgn}\left( \sum\limits_{i\in h_{in}}f(i)-\sum\limits_{i\in h_{out}}f(i)\right)\quad\text{ s.t.} \\ && \sum\limits_{h_{in}\ni i}z_{h}-\sum\limits_{h_{out}\ni i}z_{h}=\lambda \deg(i) z_{i},\quad \forall i\in V. \end{array} $$

Observe also that \(({\sum \limits }_{i\in h_{in}}f(i)-{\sum \limits }_{i\in h_{out}}f(i))z_{h}=|{\sum \limits }_{i\in h_{in}}f(i)-{\sum \limits }_{i\in h_{out}}f(i)|\) and f(i)zi = |f(i)|, for all hH and for all iV.
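As a sanity check (our own illustration, with hypothetical variable names), the coordinate form above can be verified directly on the simplest example: a graph with one edge, vertex 0 as input, vertex 1 as output, f = (1,−1) and candidate eigenvalue λ = 1.

```python
h_in, h_out = [0], [1]          # one edge: vertex 0 input, vertex 1 output
deg = [1, 1]
f = [1.0, -1.0]
lam = 1.0                       # candidate eigenvalue

s = sum(f[i] for i in h_in) - sum(f[i] for i in h_out)    # = 2
z_h = 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)          # forced: Sgn(2) = {1}
z = [1.0 if v > 0 else -1.0 for v in f]                   # z_i in Sgn(f(i))

# the equations: sum_{h_in ∋ i} z_h - sum_{h_out ∋ i} z_h = lam * deg(i) * z_i
lhs = [z_h if i in h_in else -z_h for i in range(2)]
ok = all(abs(lhs[i] - lam * deg[i] * z[i]) < 1e-12 for i in range(2))
print(ok)   # True: (1, f) is an eigenpair of the 1-Laplacian of this graph
```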

The eigenvalue problem of \({{\Delta }_{1}^{H}}\) can be defined in an analogous way. In particular, all results shown in this section for Δ1 also hold for \({{\Delta }_{1}^{H}}\). Without loss of generality, we only prove them for Δ1.

Definition 3.5

The Clarke derivative of the generalized Rayleigh quotient RQ1 (cf. Definition 2.6) at f ∈ C(V ) is

$$ \nabla \text{RQ}_{1}(f):=\left\{\xi\in C(V)~\left|~\limsup_{g\to f, t\to 0^{+}}\frac{\text{RQ}_{1}(g+t\eta)-\text{RQ}_{1}(g)}{t}\ge \langle \xi,\eta\rangle,~\forall \eta\in C(V)\right.\right\}. $$

This is a compact convex set in C(V ).

Remark 3.6

Clarke introduced such a derivative for locally Lipschitz functions, in the field of nonsmooth optimization [9, 10]. Clearly, RQ1 is not smooth, but it is piecewise smooth (therefore locally Lipschitz) on \(\mathbb {R}^{n}\setminus 0\). Hence, the Clarke derivative for RQ1 is well defined. Also, since the Clarke derivative coincides with the usual derivative for smooth functions, we choose to denote it by ∇ also for locally Lipschitz functions.

Definition 3.7

Given f ∈ C(V ), let

$$ E_{1}(f):=\sum\limits_{h\in H}\left|\sum\limits_{i\in h_{in}}f(i)-\sum\limits_{i\in h_{out}}f(i)\right| \qquad\text{and}\qquad \|f\|_{1}:=\sum\limits_{i\in V} \deg(i)|f(i)|. $$
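These two quantities are straightforward to compute; the following Python sketch (ours) evaluates E1, ∥⋅∥1 and the quotient RQ1(f) = E1(f)/∥f∥1 on a toy oriented hypergraph, where each hyperedge is stored as a pair (inputs, outputs).

```python
def E1(f, H):
    """E_1(f): sum over hyperedges of |sum of inputs - sum of outputs|."""
    return sum(abs(sum(f[i] for i in h_in) - sum(f[j] for j in h_out))
               for h_in, h_out in H)

H = [([0, 1], [2]), ([2], [0])]      # two oriented hyperedges on 3 vertices
deg = [sum(1 for a, b in H if i in a or i in b) for i in range(3)]   # [2, 1, 2]
f = [1.0, -1.0, 0.5]
rq1 = E1(f, H) / sum(deg[i] * abs(f[i]) for i in range(3))
print(deg, rq1)   # [2, 1, 2] 0.25
```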

Proposition 3.8

For all iV,

$$ (\nabla E_{1}(f))(i)=\deg(i){\Delta}_{1}f(i)\qquad\text{and}\qquad (\nabla\|f\|_{1})(i)=\deg(i)\text{Sgn}(f(i)). $$

Proof

Note that the Clarke derivative of the function \(\mathbb {R}\ni t\mapsto |t|\) is Sgn(t). Hence, by the chain rule in nonsmooth analysis, for \(a_{1},\ldots ,a_{k}\in \mathbb {R}\),

$$ \nabla_{t_{1},\ldots,t_{k}} |a_{1}t_{1}+\ldots+a_{k}t_{k}|=\left\{(a_{1}s,\ldots,a_{k}s)\in\mathbb{R}^{k}:s\in \text{Sgn}(a_{1}t_{1}+\ldots+a_{k}t_{k})\right\}. $$

Finally, applying the additivity of Clarke’s derivative, we derive the desired identities. □

Theorem 3.9

(Min-max principle for the 1-Laplacian) If f is a critical point of the function RQ1, i.e. 0 ∈∇RQ1(f), then f is an eigenfunction and RQ1(f) is the corresponding eigenvalue of Δ1. A function f ∈ C(V ) ∖ 0 is a maximum (resp. minimum) eigenfunction of Δ1 if and only if it is a maximizer (resp. minimizer) of RQ1; λ is the largest (resp. smallest) eigenvalue of Δ1 if and only if it is the maximum (resp. minimum) value of RQ1.

Also, the constants

$$ \lambda_{k}({\Delta}_{1}) := \inf_{S\in\text{Gen}_{k}}\sup_{f\in S\setminus 0} \text{RQ}_{1}(f) $$
(16)

are eigenvalues of Δ1. Furthermore, \(\lim _{p\to 1^{+}} \lambda _{k}({\Delta }_{p})= \lambda _{k}({\Delta }_{1})\), and any limit point of \(\{f_{k,p}\}_{p>1}\) is an eigenfunction of Δ1 w.r.t. \(\lambda _{k}({\Delta }_{1})\), where \(f_{k,p}\) is an eigenfunction of \(\lambda _{k}({\Delta }_{p})\), for all k = 1,…,n. Besides, if \(\lim _{p\to 1^{+}} \lambda _{k}({\Delta }_{p})=\lim _{p\to 1^{+}} \lambda _{k+l}({\Delta }_{p})\) for some \(k,l\in \mathbb {N}_{+}\), then \(\lambda _{k}({\Delta }_{1})\) has multiplicity at least l + 1.

Proof

The proof is based on the theory of Clarke derivative, established in [10].

Let f be a critical point of the function RQ1. By the chain rule for the Clarke derivative,

$$ \begin{array}{@{}rcl@{}} &&0\in \nabla \text{RQ}_{1}(f)\subset \frac{\nabla E_{1}(f)-\text{RQ}_{1}(f)\nabla \|f\|_{1}}{\|f\|_{1}}\\ \Longrightarrow\quad&&0\in \nabla E_{1}(f)-\text{RQ}_{1}(f)\nabla \|f\|_{1}\\ \Longleftrightarrow\quad&&0\in{\Delta}_{1}f-\text{RQ}_{1}(f)\text{Sgn}(f). \end{array} $$

Therefore, f is an eigenfunction of Δ1, and RQ1(f) is the corresponding eigenvalue. Also, again by the basic results on the Clarke derivative, if f is a maximizer (resp. minimizer) of RQ1, then 0 ∈∇RQ1(f). Hence, 0 ∈Δ1f −RQ1(f)Sgn(f). Thus, f is an eigenfunction, and RQ1(f) is a corresponding eigenvalue.

Now, if f is an eigenfunction corresponding to an eigenvalue λ, i.e. 0 ∈Δ1fλ Sgn(f) or equivalently

$$ 0\in \nabla E_{1}(f)-\lambda\nabla \|f\|_{1}, $$
(17)

then by the Euler identity for one-homogeneous Lipschitz functions,

$$ \langle g,f\rangle=E_{1}(f) \quad \forall g\in \nabla E_{1}(f) \qquad\text{and}\qquad \langle g^{\prime},f\rangle=\|f\|_{1} \quad \forall g^{\prime}\in \nabla \|f\|_{1}. $$

Therefore, by (17), we get that 0 = E1(f) − λ∥f∥1, which implies λ = RQ1(f). Hence, the maximum (resp. the minimum) of RQ1 is the largest (resp. smallest) eigenvalue of Δ1.

The min-max principle (16) is a consequence of the nonsmooth version of the Liusternik–Schnirelmann Theorem [11], and thus we omit the details of the proof.

The convergence property \(\lim _{p\to 1^{+}} \lambda _{k}({\Delta }_{p})= \lambda _{k}({\Delta }_{1})\) is a consequence of the result on Gamma-convergence of minimax values [12].

Now, without loss of generality, we may assume that \(f_{k,p}\to f_{\ast}\) as p → 1+. Then, according to Remark 3.2, \(\lim _{p\to 1^{+}}{\Delta }_{p}f_{k,p}\in {\Delta }_{1} f_{\ast }\). Similarly, \(|f_{k,p}(i)|^{p-2}f_{k,p}(i)\to \text{sign}(f_{\ast}(i))\) as p tends to 1+. By taking p → 1+ in the equality

$$ 0={\Delta}_{p}f_{k,p}(i)-\lambda_{k}({\Delta}_{p})|f_{k,p}(i)|^{p-2}f_{k,p}(i)\quad\forall i\in V $$

we get

$$ 0=\lim_{p\to1^{+}}{\Delta}_{p}f_{k,p}(i)-\lambda_{k}({\Delta}_{1})\text{sign}(f_{\ast}(i))\in {\Delta}_{1} f_{\ast}(i)-\lambda_{k}({\Delta}_{1})\text{Sgn}(f_{\ast}(i))\quad\forall i\in V, $$

which means that \(f_{\ast}\) is an eigenfunction of Δ1.

The condition \(\lim _{p\to 1^{+}} \lambda _{k}({\Delta }_{p})=\lim _{p\to 1^{+}} \lambda _{k+l}({\Delta }_{p})\) implies \(\lambda _{k}({\Delta }_{1})=\lambda _{k+1}({\Delta }_{1})=\cdots =\lambda _{k+l}({\Delta }_{1})\), so that \(\lambda _{k}({\Delta }_{1})\) has multiplicity at least l + 1 by the Liusternik–Schnirelmann theory. This completes the proof. □

Analogously to the case of p > 1, also for p = 1 we shall denote by

$$ \lambda_{1}\leq\cdots\leq \lambda_{n} \qquad \text{and}\qquad \mu_{1}\leq\cdots\leq \mu_{m} $$

the min-max eigenvalues of Δ1 given by (16) in Theorem 3.9 and the analogous eigenvalues of \({{\Delta }_{1}^{H}}\), which can be obtained in the same way. Also in this case, as well as for p > 1, we can always say that

$$ \begin{array}{@{}rcl@{}} &&\lambda_{1}=\min_{f\in C(V)}\text{RQ}_{1}(f),\quad \lambda_{n}=\max_{f\in C(V)}\text{RQ}_{1}(f),\\ &&\mu_{1}=\min_{\gamma\in C(H)}\text{RQ}_{1}(\gamma), \quad \mu_{m}=\max_{\gamma\in C(H)}\text{RQ}_{1}(\gamma). \end{array} $$

Remark 3.10

In contrast to the case of the p-Laplacian for p > 1, the converse of Theorem 3.9 is not true, that is, there exist eigenfunctions f of Δ1 that are not critical points of RQ1. However, showing this requires a long argument, which we develop in [23]. In [23] we also show that Conjecture 1 cannot hold for Δ1. (We already noted in Section 1.1 that this is a subtle issue also in the continuous case.)

Smallest and Largest Eigenvalues

In [31], it has been proved that

$$ \max_{\gamma\in C(H)}\text{RQ}_{1}(\gamma)=\max_{h\in H}\sum\limits_{i\in h}\frac{1}{\deg(i)}. $$
(18)

Hence, we can characterize the largest eigenvalue of \({{\Delta }^{H}_{1}}\) by means of a combinatorial quantity. In this section, we investigate further properties of both the largest and the smallest eigenvalues of the p-Laplacians, for general p.
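The identity (18) can be checked by hand on small examples. The following Python sketch (ours, with toy data) evaluates RQ1 for hyperedge functions, assuming the representation \(\text{RQ}_{1}(\gamma )={\sum }_{i\in V}\frac {1}{\deg (i)}|\langle \mathcal {I}_{i},\gamma \rangle |/{\sum }_{h\in H}|\gamma (h)|\), which is consistent with the formulas appearing later in Theorem 6.1; the indicator of the combinatorially maximizing hyperedge attains the right-hand side of (18).

```python
H = [([0, 1], [2]), ([2], [0])]   # oriented hyperedges as (inputs, outputs)
n = 3
deg = [sum(1 for a, b in H if i in a or i in b) for i in range(n)]

def rq1_edge(gamma):
    """RQ_1 for a hyperedge function, via signed vertex-hyperedge incidences."""
    num = sum(abs(sum(g for (a, _), g in zip(H, gamma) if i in a)
                  - sum(g for (_, b), g in zip(H, gamma) if i in b)) / deg[i]
              for i in range(n))
    return num / sum(abs(g) for g in gamma)

# right-hand side of (18): max over hyperedges of sum_{i in h} 1/deg(i)
rhs = max(sum(1.0 / deg[i] for i in set(a) | set(b)) for a, b in H)
# best value of RQ_1 over hyperedge indicators
best = max(rq1_edge([1.0 if k == j else 0.0 for k in range(len(H))])
           for j in range(len(H)))
print(best, rhs)   # 2.0 2.0
```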

Lemma 4.1

For all p, λ1 ≤ 1 ≤ λn.

Proof

Let \(\tilde {f}:V\rightarrow \mathbb {R}\) be the function that is 1 on a fixed vertex and 0 on all other vertices. Then, for all p, \(\text {RQ}_{p}(\tilde {f})=1\). Therefore,

$$ \lambda_{1} =\min_{f\in C(V)}\text{RQ}_{p}(f)\leq \text{RQ}_{p}(\tilde{f})=1 \leq \max_{f\in C(V)}\text{RQ}_{p}(f)=\lambda_{n}. $$

Lemma 4.2

For p = 1 and for all hypergraphs, λn = 1.

Proof

We generalize the proof of [22, Lemma 8]. Let \(\hat {f}:V\rightarrow \mathbb {R}\) be a maximizer of

$$ \frac{{\sum\limits}_{h\in H}\left|{\sum\limits}_{i\text{ input of }h}f(i)-{\sum\limits}_{j\text{ output of }h}f(j)\right|}{{\sum\limits}_{i\in V}\deg(i)|f(i)|} $$

and assume, without loss of generality, that \({\sum \limits }_{i\in V}\deg (i)|\hat {f}(i)|=1\). Then,

$$ \begin{array}{@{}rcl@{}} \lambda_{n}&=& \max_{f:V\rightarrow\mathbb{R}}\frac{{\sum\limits}_{h\in H}\left|{\sum\limits}_{i\text{ input of }h}f(i)-{\sum\limits}_{j\text{ output of }h}f(j)\right|}{{\sum\limits}_{i\in V}\deg(i)|f(i)|} \\ &=&\sum\limits_{h\in H}\left|\sum\limits_{i\text{ input of }h}\hat{f}(i)-\sum\limits_{j\text{ output of }h}\hat{f}(j)\right|\\ &\leq& \sum\limits_{h\in H}\sum\limits_{i\in h}|\hat{f}(i)|\\ &=&\sum\limits_{i\in V}\deg(i)\cdot \bigl|\hat{f}(i)\bigr|\\ &=&1. \end{array} $$

The reverse inequality follows from Lemma 4.1. □
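The estimate in the proof holds for every f by the triangle inequality, so it can also be probed numerically; the following randomized sketch (ours, with an arbitrary toy hypergraph) samples functions f and checks that RQ1(f) never exceeds 1.

```python
import random

random.seed(0)
n = 6
# random oriented hyperedges; inputs and outputs are kept disjoint
H = [(sorted(random.sample(range(n), 2)), sorted(random.sample(range(n), 2)))
     for _ in range(8)]
H = [(a, [j for j in b if j not in a]) for a, b in H]
deg = [sum(1 for a, b in H if i in a or i in b) for i in range(n)]

ok = True
for _ in range(2000):
    f = [random.uniform(-1, 1) for _ in range(n)]
    num = sum(abs(sum(f[i] for i in a) - sum(f[j] for j in b)) for a, b in H)
    den = sum(deg[i] * abs(f[i]) for i in range(n))
    if den > 0 and num > den + 1e-9:    # would mean RQ_1(f) > 1
        ok = False
print(ok)   # True: no sampled f exceeded RQ_1 = 1
```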

Remark 4.3

Comparing (18) with Lemma 4.2, we see that, while μm = λn and μ1 = λ1 for p = 2 (i.e. in the case of the usual hypergraph Laplacian), this is not necessarily true for all p.

Lemma 4.4

For all p,

$$ \mu_{1}\leq \min_{h\in H}\sum\limits_{i\in h}\frac{1}{\deg(i)}\leq \max_{h\in H}\sum\limits_{i\in h}\frac{1}{\deg(i)}\leq \mu_{m}. $$

Proof

Let \(\tilde {\gamma }:H\rightarrow \mathbb {R}\) be the function that is 1 on a fixed hyperedge h and 0 on all other hyperedges. Then, for all p,

$$ \text{RQ}_{p}(\tilde{\gamma})=\sum\limits_{i\in h}\frac{1}{\deg(i)}. $$

Therefore,

$$ \mu_{1}= \min_{\gamma\in C(H)}\text{RQ}_{p}(\gamma)\leq \text{RQ}_{p}(\tilde{\gamma}) =\sum\limits_{i\in h}\frac{1}{\deg(i)}\leq \max_{\gamma\in C(H)}\text{RQ}_{p}(\gamma)=\mu_{m}. $$

Since this is true for all h, this proves the claim. □

Nodal Domain Theorems

In [33], the authors prove two nodal domain theorems for Δ2. In this section, we establish similar results for Δp, for all p ≥ 1. First, we recall the definitions of nodal domains for oriented hypergraphs. We refer the reader to [4] for nodal domain theorems on graphs.

Definition 5.1

([33]) Given a function \(f:V\to \mathbb {R}\), we let supp(f) := {iV : f(i)≠ 0} be the support set of f. A nodal domain of f is a connected component of

$$ H\cap \text{supp}(f):=\left\{h^{\prime}=(h_{in}\cap \text{supp}(f), h_{out}\cap \text{supp}(f)): h\in H\right\}. $$

Similarly, we let supp±(f) := {iV : ±f(i) > 0}. A positive nodal domain of f is a connected component of

$$ H\cap \text{supp}_{+}(f):=\left\{h^{\prime}=(h_{in}\cap \text{supp}_{+}(f), h_{out}\cap \text{supp}_{+}(f)): h\in H\right\}. $$

A negative nodal domain of f is a connected component of H ∩supp(f).
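Computationally, nodal domains are connected components of a sub-hypergraph, so they can be counted with a small union-find structure. The following Python sketch (ours; hyperedges stored as (inputs, outputs) pairs) counts the nodal domains of f for any choice of support (full, positive or negative):

```python
def nodal_domains(f, H, support):
    """Count connected components of the sub-hypergraph induced on the
    vertices i with support(f[i]) true."""
    verts = [i for i in range(len(f)) if support(f[i])]
    parent = {i: i for i in verts}

    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for h_in, h_out in H:
        hv = [i for i in set(h_in) | set(h_out) if i in parent]
        for i in hv[1:]:                  # all vertices of a restricted
            parent[find(hv[0])] = find(i)  # hyperedge are joined
    return len({find(i) for i in verts})

H = [([0, 1], []), ([2, 3], [])]
f = [1.0, -2.0, 0.0, 3.0]
print(nodal_domains(f, H, lambda v: v != 0))   # 2 nodal domains
print(nodal_domains(f, H, lambda v: v > 0))    # 2 positive nodal domains
```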

Signless Nodal Domain

Definition 5.2

We say an eigenvalue λ of Δp has multiplicity r if gen{eigenfunctions w.r.t. λ} = r.

Theorem 5.3

If f is an eigenfunction of the k-th min-max eigenvalue λkp) and this has multiplicity r, then the number of nodal domains of f is smaller than or equal to k + r − 1.

Proof

Suppose the contrary, that is, f is an eigenfunction of λk with multiplicity r, and f has at least k + r nodal domains which are denoted by V1,…,Vk+r. For simplicity, we assume that

$$ \lambda_{1}\le \cdots\le \lambda_{k}=\lambda_{k+1}=\cdots=\lambda_{k+r-1}<\lambda_{k+r} \le\cdots\le \lambda_{n}. $$

Consider a linear function-space X spanned by \(f|_{V_{1}},\ldots ,f|_{V_{k+r}}\), where the restriction \(f|_{V_{i}}\) is defined by

$$ f|_{V_{i}}(j)=\left\{\begin{array}{ll} f(j)&\quad\text{ if }j\in V_{i},\\ 0&\quad\text{ if } j\not\in V_{i}. \end{array}\right. $$

Since V1,…,Vk+r are pairwise disjoint, \(\dim X=k+r\). Given g ∈ X ∖ 0, there exists (t1,…,tk+r)≠0 such that

$$ g={\sum}_{i=1}^{k+r} t_{i} f|_{V_{i}}. $$

It is clear that \(\|g\|_{p}^{p}={\sum }_{i=1}^{k+r} |t_{i}|^{p}\|f|_{V_{i}}\|_{p}^{p}\). By the definition of nodal domain, each hyperedge h intersects with at most one Vi ∈{V1,…,Vk+r}, which implies that \(E_{p}(g)={\sum }_{i=1}^{k+r} |t_{i}|^{p} E_{p}(f|_{V_{i}})\). Finally, we note that for p > 1,

$$ \begin{array}{@{}rcl@{}} && \sum\limits_{h\in H: h\cap V_{l}\ne\emptyset}\left|\sum\limits_{i\in h_{in}}f|_{V_{l}}(i)-\sum\limits_{j\in h_{out}}f|_{V_{l}}(j)\right|^{p} = \sum\limits_{h\in H: h\cap V_{l}\ne\emptyset}\left|\sum\limits_{i\in h_{in}}f(i)-\sum\limits_{j\in h_{out}}f(j)\right|^{p}\\ &&= \sum\limits_{i\in V_{l}}f(i)\sum\limits_{h\ni i}\left|\sum\limits_{j\in h_{in}}f(j)-\sum\limits_{j^{\prime}\in h_{out}}f(j^{\prime})\right|^{p-2}\left( \sum\limits_{j\in h, o_{h}(i,j)=-1}f(j)-\sum\limits_{j^{\prime}\in h,o_{h}(i,j^{\prime})=1}f(j^{\prime})\right)\\ &&= \sum\limits_{i\in V_{l}}f(i) \lambda_{k}\deg(i)|f(i)|^{p-2}f(i) = \lambda_{k}\sum\limits_{i\in V_{l}}\deg(i)|f|_{V_{l}}(i)|^{p}=\lambda_{k}\|f|_{V_{l}}\|_{p}^{p}, \end{array} $$

which implies that \(E_{p}(f|_{V_{l}})=\lambda _{k}\|f|_{V_{l}}\|_{p}^{p}\). For the case of p = 1, we have

$$ \begin{array}{@{}rcl@{}} \sum\limits_{h\in H: h\cap V_{l}\ne\emptyset}\left|\sum\limits_{i\in h_{in}}f|_{V_{l}}(i)-\sum\limits_{j\in h_{out}}f|_{V_{l}}(j)\right| &=& \sum\limits_{h\in H: h\cap V_{l}\ne\emptyset}\left|\sum\limits_{i\in h_{in}}f(i)-\sum\limits_{j\in h_{out}}f(j)\right|\\ &=& \sum\limits_{h\in H: h\cap V_{l}\ne\emptyset}z_{h}\left( \sum\limits_{i\in h_{in}}f(i)-\sum\limits_{j\in h_{out}}f(j)\right)\\ &=& \sum\limits_{i\in V_{l}}f(i)\left( \sum\limits_{h_{in}\ni i}z_{h}-\sum\limits_{h_{out}\ni i}z_{h}\right)\\ &=& \sum\limits_{i\in V_{l}}f(i) \lambda_{k}\deg(i)z_{i}\\ &=& \lambda_{k}\sum\limits_{i\in V_{l}}\deg(i)|f|_{V_{l}}(i)|\\ &=&\lambda_{k}\|f|_{V_{l}}\|_{1}, \end{array} $$

where the parameters \(z_{h}\in \text {Sgn}({\sum \limits }_{i\in h_{in}}f(i)-{\sum \limits }_{j\in h_{out}}f(j))\) and zi ∈Sgn(f(i)) are chosen as in Remark 3.4.

Therefore,

$$ \text{RQ}_{p}(g)=\frac{{\sum\limits}_{i=1}^{k+r} |t_{i}|^{p} E_{p}(f|_{V_{i}})}{{\sum\limits}_{i=1}^{k+r} |t_{i}|^{p}\|f|_{V_{i}}\|_{p}^{p}}=\lambda_{k}. $$

By the min-max principle for Δp,

$$ \begin{array}{@{}rcl@{}} \lambda_{k+r}&=&\min_{X^{\prime}\in\text{Gen}_{k+r}}\max_{g^{\prime}\in X^{\prime}\setminus0} \text{RQ}_{p}(g^{\prime})\\ &\le& \max_{g\in X\setminus0}\text{RQ}_{p}(g)\\ &=&\lambda_{k}, \end{array} $$

which leads to a contradiction. □

Positive and Negative Nodal Domain Theorem

In this section, we show a new Courant nodal domain theorem for oriented hypergraphs with only inputs. Note that Theorem 5.3 does not hold if we replace “nodal domains” by “positive and negative nodal domains”. In fact, for the connected hypergraph Γk := (V,Ek) with V := {1,…,n} and

$$ E_{k}:=\{\{i,j\}:i\le k\text{ and }j\ge k+1, \text{ or vice versa}\} $$

in which we suppose that there are only inputs, the number of positive and negative nodal domains of the first eigenfunction w.r.t. λ1 = 0 is n.

Theorem 5.4

Let Γ = (V,H) be an oriented hypergraph with only inputs. If f is an eigenfunction of the k-th min-max eigenvalue λk and this has multiplicity r, then the number of positive and negative nodal domains of f is smaller than or equal to n − k + r.

Proof

Suppose the contrary, that is, f is an eigenfunction of λk with multiplicity r, and f has at least n − k + r + 1 positive and negative nodal domains, which we denote by \(V_{1},\ldots ,V_{n-k+r+1}\). Consider a linear function-space X spanned by \(f|_{V_{1}},\ldots ,f|_{V_{n-k+r+1}}\), where the restriction \(f|_{V_{i}}\) is defined by

$$ f|_{V_{i}}(j)=\left\{\begin{array}{ll} f(j)&\quad\text{ if }j\in V_{i},\\ 0&\quad\text{ if } j\not\in V_{i}. \end{array}\right. $$

Since \(V_{1},\ldots ,V_{n-k+r+1}\) are pairwise disjoint, \(\dim X=n-k+r+1\). For g ∈ X ∖ 0, there exists \((t_{1},\ldots ,t_{n-k+r+1})\ne 0\) such that \(g={\sum \limits }_{i=1}^{n-k+r+1} t_{i} f|_{V_{i}}\). By the definition of positive and negative nodal domains, each hyperedge h intersects at most one positive nodal domain and at most one negative nodal domain. Thus, for \(l\ne l^{\prime }\) and h ∈ H, \(\left ({\sum \limits }_{i\in h_{in}}f|_{V_{l}}(i)\right )\cdot \left ({\sum \limits }_{i\in h_{in}}f|_{V_{l^{\prime }}}(i)\right )\le 0\).

Now, with a slight abuse of notation, we write h = hin, since the hypergraph has only inputs. For p > 1, we have that

$$ \begin{array}{@{}rcl@{}} \sum\limits_{h\in H}\left|\sum\limits_{i\in h}g(i)\right|^{p}&=& \sum\limits_{h\in H}\left|\sum\limits_{i\in h}\sum\limits_{l=1}^{n-k+r+1}t_{l}f|_{V_{l}}(i)\right|^{p}\\ &=&\sum\limits_{h\in H}\left|\sum\limits_{l=1}^{n-k+r+1}t_{l}\left( \sum\limits_{i\in h}f|_{V_{l}}(i)\right)\right|^{p}\\ &\ge& \sum\limits_{h\in H}\sum\limits_{l=1}^{n-k+r+1}|t_{l}|^{p}\left( \sum\limits_{i\in h}f|_{V_{l}}(i)\right)\left|\sum\limits_{i\in h}f(i)\right|^{p-2}\sum\limits_{i\in h}f(i)\\ &=&\sum\limits_{l=1}^{n-k+r+1}|t_{l}|^{p}\sum\limits_{i\in V_{l}}f(i)\left( \sum\limits_{h\in H: h\ni i}\left|\sum\limits_{j^{\prime}\in h}f(j^{\prime})\right|^{p-2}\left( {\sum}_{j^{\prime}\in h}f(j^{\prime})\right)\right)\\ &=&\sum\limits_{l=1}^{n-k+r+1}|t_{l}|^{p}\sum\limits_{i\in V_{l}}f(i) \lambda_{k}\deg(i)|f(i)|^{p-2}f(i)\\ &=&\lambda_{k}\sum\limits_{l=1}^{n-k+r+1}|t_{l}|^{p}\sum\limits_{i\in V_{l}}\deg(i)|f(i)|^{p}\\ &=&\lambda_{k}\sum\limits_{i\in V}\deg(i)|g(i)|^{p}, \end{array} $$

where the inequality is deduced by taking \(A={\sum \limits }_{i\in h}f|_{V_{l}}(i)\) and \(B={\sum \limits }_{i\in h}f|_{V_{l^{\prime }}}(i)\) in the following lemma. Similarly, for p = 1 we have

$$ \begin{array}{@{}rcl@{}} \sum\limits_{h\in H}\left|\sum\limits_{i\in h}g(i)\right|&=& \sum\limits_{h\in H}\left|\sum\limits_{l=1}^{n-k+r+1}t_{l}\left( \sum\limits_{i\in h}f|_{V_{l}}(i)\right)\right|\\ &\ge& \sum\limits_{h\in H}\sum\limits_{l=1}^{n-k+r+1}|t_{l}|\left( \sum\limits_{i\in h}f|_{V_{l}}(i)\right)z_{h}\\ &=&\sum\limits_{l=1}^{n-k+r+1}|t_{l}|\sum\limits_{i\in V_{l}}f(i)\left( \sum\limits_{h\in H: h\ni i}z_{h}\right)\\ &=&\sum\limits_{l=1}^{n-k+r+1}|t_{l}|\sum\limits_{i\in V_{l}}f(i) \lambda_{k}\deg(i)z_{i}\\ &=&\lambda_{k}\sum\limits_{l=1}^{n-k+r+1}|t_{l}|\sum\limits_{i\in V_{l}}\deg(i)|f(i)|=\lambda_{k}\sum\limits_{i\in V}\deg(i)|g(i)|, \end{array} $$

where \(z_{h}\in \text {Sgn}({\sum \limits }_{i\in h}f(i))\) and zi ∈Sgn(f(i)).

Lemma 5.5

Let p ≥ 1, and let \(t,s,A,B\in \mathbb {R}\) with AB ≤ 0. Then,

$$ |tA+sB|^{p}\ge (|t|^{p}A+|s|^{p}B)|A+B|^{p-2}(A+B). $$
(19)

In the particular case of p = 1, we further have |tA + sB|≥ (|t|A + |s|B)z, ∀z ∈Sgn(A + B).

By Lemma 5.5, it follows that RQ(g) ≥ λk.

By the intersection property of \(\mathbb {Z}_{2}\)-genus, \(X^{\prime }\cap X\setminus \{0\}\ne \emptyset \) for any \(X^{\prime }\in \text {Gen}_{k-r}\). Therefore,

$$ \begin{array}{@{}rcl@{}} \lambda_{k-r}&=&\inf_{X^{\prime}\in\text{Gen}_{k-r}}\sup_{g^{\prime}\in X^{\prime}\setminus0}\text{RQ}(g^{\prime}) \ge \inf_{X^{\prime}\in\text{Gen}_{k-r}}\sup_{g^{\prime}\in X^{\prime}\cap X\setminus0}\text{RQ}(g^{\prime})\\ &\ge& \inf_{X^{\prime}\in\text{Gen}_{k-r}}\inf_{g^{\prime}\in X^{\prime}\cap X\setminus0}\text{RQ}(g^{\prime}) \ge \inf_{X^{\prime}\in\text{Gen}_{k-r}}\inf_{g^{\prime}\in X\setminus0}\text{RQ}(g^{\prime})\\ &=&\inf_{g\in X\setminus0}\text{RQ}(g) \ge\lambda_{k}. \end{array} $$

Together with \(\lambda _{k-r}\leq \cdots \leq \lambda _{k-1}\leq \lambda _{k}\), this implies that \(\lambda _{k-r}=\cdots =\lambda _{k-1}=\lambda _{k}\), meaning that the multiplicity of λk is at least r + 1, which leads to a contradiction. □

It is only left to prove Lemma 5.5.

Proof of Lemma 5.5

Without loss of generality, we may assume that A > 0 > B and \(A>B^{\prime }:=|B|\). In order to prove (19), it suffices to show that

$$ |tA-sB^{\prime}|^{p}\ge \left( |t|^{p}A-|s|^{p}B^{\prime}\right)(A-B^{\prime})^{p-1}, $$

that is,

$$ \left|t\frac{A}{A-B^{\prime}}-s\frac{B^{\prime}}{A-B^{\prime}}\right|^{p}\ge |t|^{p}\frac{A}{A-B^{\prime}}-|s|^{p}\frac{B^{\prime}}{A-B^{\prime}}. $$

By the convexity of the function t↦|t|p, we have

$$ \frac{A-B^{\prime}}{A}\left|t\frac{A}{A-B^{\prime}}-s\frac{B^{\prime}}{A-B^{\prime}}\right|^{p}+\frac{B^{\prime}}{A}|s|^{p}\ge |t|^{p}, $$

which proves (19). Now, in order to prove the stronger inequality for p = 1, since z = |A + B|(A + B)− 1 if A + B≠ 0, it suffices to focus on the case of A + B = 0. In this case, by \(|t-s|\ge \max \limits \{|t|-|s|,|s|-|t|\}\), we have |ts|≥ (|t|−|s|)z for any z ∈ [− 1,1]. Therefore, |tA + sB| = A|ts|≥ A(|t|−|s|)z = (|t|A + |s|B)z. The proof is completed. □
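Since Lemma 5.5 is a pointwise inequality in four real variables, it can also be probed numerically; the following randomized Python sketch (ours) samples t, s, A, B with AB ≤ 0 and checks (19), interpreting |A + B|^{p−2}(A + B) as 0 when A + B = 0.

```python
import random

random.seed(1)
ok = True
for _ in range(20000):
    p = random.uniform(1.0, 4.0)
    t, s = random.uniform(-3, 3), random.uniform(-3, 3)
    A = random.uniform(0, 3)
    B = -random.uniform(0, 3)           # ensures AB <= 0
    lhs = abs(t * A + s * B) ** p
    m = A + B
    rhs = ((abs(t) ** p * A + abs(s) ** p * B) * abs(m) ** (p - 2) * m
           if m != 0 else 0.0)
    if lhs < rhs - 1e-9:                # would violate (19)
        ok = False
print(ok)   # True: (19) held on every sample
```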

Smallest Nonzero Eigenvalue

In this section, we discuss the smallest nonzero eigenvalue \(\lambda _{\min \limits }\) of Δp, for p ≥ 1, as a continuation of Sections 5 and 6 in [33], which focus on the simpler case of \(\lambda _{\min \limits }\) for the 2-Laplacian. As in [33], we let \(\mathcal {I}^{h}:V\to \mathbb {R}\) and \(\mathcal {I}_{i}:H\to \mathbb {R}\) be defined by

$$ \mathcal{I}_{i}(h):= \mathcal{I}^{h}(i):=\left\{\begin{array}{ll} 1 &\quad \text{if }i\in h_{in},\\ -1 &\quad \text{if }i\in h_{out},\\ 0 &\quad \text{otherwise.} \end{array}\right. $$
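The following Python sketch (ours) encodes \(\mathcal {I}^{h}\) as a vector in \(\mathbb {R}^{n}\) and verifies the identity \(\langle \mathcal {I}^{h},f\rangle ={\sum }_{i\in h_{in}}f(i)-{\sum }_{j\in h_{out}}f(j)\), which is used throughout this section:

```python
def incidence_vector(h_in, h_out, n):
    """The vector of I^h: +1 on inputs, -1 on outputs, 0 elsewhere."""
    v = [0] * n
    for i in h_in:
        v[i] = 1
    for i in h_out:
        v[i] = -1
    return v

H = [([0, 1], [2]), ([2], [0])]
n, f = 3, [1.0, -1.0, 0.5]
checks = []
for h_in, h_out in H:
    Ih = incidence_vector(h_in, h_out, n)
    lhs = sum(Ih[i] * f[i] for i in range(n))           # <I^h, f>
    rhs = sum(f[i] for i in h_in) - sum(f[j] for j in h_out)
    checks.append(abs(lhs - rhs) < 1e-12)
print(all(checks))   # True
```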

Theorem 6.1

For p ≥ 1,

$$ \begin{array}{@{}rcl@{}} \lambda_{\min}&=&\min_{f\in\text{span}(\mathcal{I}^{h}:h\in H)}\frac{{\sum\limits}_{h\in H}|\langle\mathcal{I}^{h},f\rangle|^{p}}{\min_{g\in \text{span}(\mathcal{I}^{h}:h\in H)^{\bot}}{\sum\limits}_{i\in V}\deg(i)|f(i)-g(i)|^{p}}=\lambda_{d+1},\\ \mu_{\min}&=&\min_{\gamma\in\text{span}(\mathcal{I}_{i}:i\in V)}\frac{{\sum\limits}_{i\in V}\frac{1}{\deg(i)}|\langle\mathcal{I}_{i},\gamma\rangle|^{p}}{\min_{\eta\in \text{span}(\mathcal{I}_{i}:i\in V)^{\bot}}{\sum\limits}_{h\in H} |\gamma(h)-\eta(h)|^{p}}=\mu_{d^{\prime}+1}, \end{array} $$
(20)

where \(d:=\dim \text {span}(\mathcal {I}^{h}:h\in H)^{\bot }\) and \(d^{\prime }:=\dim \text {span}(\mathcal {I}_{i}:i\in V)^{\bot }\).

Remark 6.2

(20) above generalizes Equation (5) in [33]. In fact, for p = 2, by letting

$$ \sum\limits_{i\in V}\deg(i)|f(i)-\bar g(i)|^{2}:=\min_{g\in \text{span}(\mathcal{I}^{h}:h\in H)^{\bot}}\sum\limits_{i\in V}\deg(i)|f(i)-g(i)|^{2} $$

we have that \(\bar f:=f-\bar g\) is orthogonal to \(\text {span}(\mathcal {I}^{h}:h\in H)^{\bot }\) with respect to the weighted scalar product \((f^{\prime },g^{\prime }):={\sum \limits }_{i\in V}\deg (i) f^{\prime }(i) g^{\prime }(i)\). Therefore,

$$ \begin{array}{@{}rcl@{}} \lambda_{\min}&=&\min_{f\in\text{span}(\mathcal{I}^{h}:h\in H)} \frac{{\sum\limits}_{h\in H}\langle\mathcal{I}^{h},f \rangle^{2}}{(\bar f,\bar f)}\\ &=&\min_{\bar f\in \text{span}\{D^{-1}\mathcal{I}^{h}:h\in H\}}\frac{{\sum\limits}_{h\in H}\langle\mathcal{I}^{h},\bar f \rangle^{2}}{(\bar f,\bar f)}\\ &=& \min_{f\in\text{span}\big\{D^{-\frac12}\mathcal{I}^{h}:h\in H\big\}}\frac{{\sum\limits}_{h\in H}\langle D^{-\frac12}\mathcal{I}^{h},f \rangle^{2}}{\langle f,f\rangle} \end{array} $$

and this coincides with Equation (5) in [33, Lemma 6.1].

Proof of Theorem 6.1

Let \(X:=\text {span}(\mathcal {I}^{h}:h\in H)^{\bot }\). We shall prove that

$$ \lambda_{\min}=\lambda_{d+1}=\tilde{\lambda}:=\min_{f\in X^{\bot}}\frac{{\sum}_{h\in H}|\langle\mathcal{I}^{h},f\rangle|^{p}}{\min_{g\in X}{\sum\limits}_{i\in V}\deg(i)|f(i)-g(i)|^{p}}. $$

If d = 0, the claim is straightforward because in this case X = 0, \(X^{\bot }=\mathbb {R}^{n}\) and

$$ \lambda_{\min}=\min_{f\in \mathbb{R}^{n}}\frac{{\sum}_{h\in H}|\langle\mathcal{I}^{h},f\rangle|^{p}}{ {\sum\limits}_{i\in V}\deg(i)|f(i)|^{p}}=\lambda_{1}. $$

Now, assume d ≥ 1. Since X ∈Gend and RQp(f) = 0 for all f ∈ X, we have λ1 = ⋯ = λd = 0. From the local compactness of \(X^{\bot }\), the zero-homogeneity of RQp and the fact that Ep(f) > 0 for all \(f\in X^{\bot }\setminus 0\), it follows that \(\tilde {\lambda }>0\). For the case p > 1, it remains to prove the following three steps.

  1. (I)

    \(\lambda _{d+1}\ge \tilde {\lambda }\):

Observe that \(\dim X^{\bot }=n-d\). Since the lp-norm is smooth and strictly convex for p > 1, for each f there is a unique \(g_{f}\in X\) such that

    $$ \sum\limits_{i\in V}\deg(i)|f(i)-g_{f}(i)|^{p}=\min_{g\in X}\sum\limits_{i\in V}\deg(i)|f(i)-g(i)|^{p} $$

and the map φ : f↦f − gf is smooth. Moreover, \(\varphi |_{X^{\bot }}:X^{\bot }\to \varphi (X^{\bot })\) is bicontinuous (i.e., a homeomorphism). Clearly, φ maps − f to \(-f-g_{-f}=-f+g_{f}\), therefore φ is odd. Hence, if we let \(f^{\bot }\) be the projection of f to \(X^{\bot }\), we get an odd homeomorphism \(\psi :\mathbb {R}^{n}\to \mathbb {R}^{n}\), \(f\mapsto f-g_{f^{\bot }}\).

Thus, because of the homotopy property of the \(\mathbb {Z}_{2}\)-genus, for any S ∈Gend+ 1 we have that the image \(\psi ^{-1}(S)\in \text {Gen}_{d+1}\). Moreover, by the intersection property of the \(\mathbb {Z}_{2}\)-genus, \(\psi ^{-1}(S)\cap X^{\bot }\ne \emptyset \), which implies \(S\cap \psi (X^{\bot })=\psi (\psi ^{-1}(S)\cap X^{\bot })\ne \emptyset \). Also note that \(\psi (X^{\bot })=\varphi (X^{\bot })\). Hence, for any S ∈Gend+ 1,

    $$ \sup_{f\in S}\text{RQ}_{p}(f)\ge \inf_{f\in \varphi(X^{\bot})}\text{RQ}_{p}(f)=\tilde{\lambda}. $$

    This proves that \(\lambda _{d+1}\ge \tilde {\lambda }\).

  2. (II)

    \(\lambda _{d+1}\le \tilde {\lambda }\):

For any \(f\in X^{\bot }\setminus 0\), let \(X^{\prime }:=\text {span}(X\cup \{f\})\). Then, \(X^{\prime }\in \text {Gen}_{d+1}\) and

    $$ \begin{array}{@{}rcl@{}} \lambda_{d+1}\le \sup_{f^{\prime}\in X^{\prime}}\text{RQ}_{p}(f^{\prime})&=&\sup_{g\in X}\frac{E_{p}(f)}{{\sum\limits}_{i\in V}\deg(i)|f(i)+g(i)|^{p}}\\ &=&\frac{{\sum\limits}_{h\in H}|\langle\mathcal{I}^{h},f\rangle|^{p}}{\min_{g\in X}{\sum\limits}_{i\in V}\deg(i)|f(i)-g(i)|^{p}}. \end{array} $$

Since this holds for all \(f\in X^{\bot }\setminus 0\), we derive that \(\lambda _{d+1}\le \tilde {\lambda }\).

  3. (III)

    There is no positive eigenvalue between λ1 = 0 and λd+ 1 > 0:

    Suppose the contrary and assume that f is an eigenfunction with eigenvalue \(\text {RQ}_{p}(f)\in (0,\tilde {\lambda })\). Then ∇RQp(f) = 0. Consider the function t↦RQp(ftgf). On the one hand,

    $$ \frac{d}{dt} |_{t=0}\text{RQ}_{p}(f-tg_{f})=-\langle\nabla \text{RQ}_{p}(f),g_{f}\rangle=0. $$

    On the other hand, Ep(ftgf) = Ep(f) and the function

    $$ t\mapsto \sum\limits_{i\in V}\deg(i)|f(i)-tg_{f}(i)|^{p} $$
    (21)

    is a strictly convex function with minimum at t = 1. This implies that (21) is strictly decreasing and convex on (− 1,1), thus

    $$ \frac{d}{dt} |_{t=0} \sum\limits_{i\in V}\deg(i)|f(i)-tg_{f}(i)|^{p}<0. $$

    Hence, we get \(\frac {d}{dt} |_{t=0}\text {RQ}_{p}(f-tg_{f})>0\), which leads to a contradiction.

This proves the case p > 1. Finally, we turn to the case p = 1. Since

$$ \lambda_{d+1}({\Delta}_{p})\xrightarrow{p\rightarrow 1^{+}} \lambda_{d+1}({\Delta}_{1})\qquad \text{and}\qquad \tilde{\lambda}({\Delta}_{p})\xrightarrow{p\rightarrow 1^{+}} \tilde{\lambda}({\Delta}_{1}), $$

we only need to prove that (III) holds also for Δ1. Suppose the contrary and let \(\hat {f}\) be an eigenfunction corresponding to an eigenvalue \(\lambda \in (0,\tilde {\lambda })\). Then, \(0\in \nabla E_{1}(\hat {f})-\lambda \nabla \|\hat {f}\|_{1}\). Now, consider a flow near \(\hat {f}\) defined by η(f,t) := ftgf, where t ≥ 0 and \(f\in \mathbb {B}_{\delta }(\hat {f})\) for sufficiently small δ > 0. Note that

$$ E_{1}(f-tg_{f})-\lambda \|f-tg_{f}\|_{1}= E_{1}(f)-\lambda \|f-tg_{f}\|_{1} $$

is an increasing function of t, since ∥ftgf1 < ∥f1 and ∥⋅∥1 is convex. Consequently, by the theory of weak slope [11], we have that \(0\not \in \nabla (E_{1}(\hat {f})-\lambda \|\hat {f}\|_{1})=\nabla E_{1}(\hat {f})-\lambda \nabla \|\hat {f}\|_{1}\), which is a contradiction. This completes the proof. □

We shall now discuss some consequences of Theorem 6.1.

Corollary 6.3

For p ≥ 1,

$$ \lambda_{\min}\ge \min_{f\in\text{span}(\mathcal{I}^{h}:h\in H)}\frac{{\sum\limits}_{h\in H}|\langle\mathcal{I}^{h},f\rangle|^{p}}{{\sum\limits}_{i\in V}\deg(i)|f(i)|^{p}}. $$

Proof

It follows immediately from Theorem 6.1. □

Corollary 6.4

For p ≥ 1, let \(\lambda _{p,\min \limits }\) be the smallest positive eigenvalue of the p-Laplacian. Then,

$$ \lambda_{p,\min}\ge \left\{\begin{array}{ll} |H|^{1-\frac p2} \lambda_{2,\min}^{\frac p2} &\quad\text{ if }p\ge2,\\ \text{Vol}(V)^{\frac p2-1}\lambda_{2,\min}^{\frac p2} &\quad\text{ if }p\le2. \end{array}\right. $$

Proof

For p ≤ 2, it is known that \({\sum }_{h\in H}|\langle \mathcal {I}^{h},f\rangle |^{p}\ge \left ({\sum }_{h\in H}|\langle \mathcal {I}^{h},f\rangle |^{2}\right )^{p/2}\) and

$$ \left( \frac{{\sum\limits}_{i\in V}\deg(i)|f(i)|^{p}}{\text{Vol}(V)}\right)^{\frac1p}\le \left( \frac{{\sum\limits}_{i\in V}\deg(i)|f(i)|^{2}}{\text{Vol}(V)}\right)^{\frac12}. $$

Thus, applying Corollary 6.3, we have

$$ \lambda_{p,\min}\ge\min_{f\in\text{span}(\mathcal{I}^{h}:h\in H)}\text{Vol}(V)^{\frac p2-1}\left( \frac{{\sum\limits}_{h\in H}|\langle\mathcal{I}^{h},f\rangle|^{2}}{ {\sum\limits}_{i\in V}\deg(i)|f(i)|^{2}}\right)^{\frac p2}=\text{Vol}(V)^{\frac p2-1}\lambda_{2,\min}^{\frac p2}. $$

The case of p ≥ 2 is similar. □
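The two inequalities used above for p ≤ 2 (subadditivity of \(t\mapsto t^{p/2}\) and monotonicity of weighted power means) can be probed numerically; the following randomized sketch (ours) checks both on random data:

```python
import random

random.seed(2)
ok = True
for _ in range(5000):
    p = random.uniform(1.0, 2.0)
    x = [random.uniform(-2, 2) for _ in range(5)]
    w = [random.uniform(0.5, 3.0) for _ in range(5)]     # weights (degrees)
    W = sum(w)
    # sum |x|^p >= (sum |x|^2)^(p/2), by subadditivity of t -> t^(p/2)
    if sum(abs(v) ** p for v in x) < sum(v * v for v in x) ** (p / 2) - 1e-9:
        ok = False
    # weighted power-mean monotonicity: M_p <= M_2 for p <= 2
    mp = (sum(wi * abs(v) ** p for wi, v in zip(w, x)) / W) ** (1 / p)
    m2 = (sum(wi * v * v for wi, v in zip(w, x)) / W) ** 0.5
    if mp > m2 + 1e-9:
        ok = False
print(ok)   # True: both inequalities held on every sample
```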

Remark 6.5

We further have

$$ \frac{\hat{\lambda}_{p,\min}}{\hat{\lambda}_{q,\min}}\ge \left\{\begin{array}{ll} |H|^{\frac 1p-\frac 1q} &\quad\text{if }p\ge q,\\ \text{Vol}(V)^{\frac 1q-\frac 1p} &\quad\text{if }p\le q, \end{array}\right. $$

where \(\hat {\lambda }_{p,\min \limits }=\lambda _{p,\min \limits }^{\frac 1p}\). This implies that

$$ \text{Vol}(V)^{\frac 1q-\frac 1p}\ge \frac{\hat{\lambda}_{p,\min}}{\hat{\lambda}_{q,\min}}\ge |H|^{\frac 1p-\frac 1q} \qquad\text{ if }p\ge q, $$

thus \(\hat {\lambda }_{p,\min \limits }\) is a continuous function of \(p\in [1,\infty )\) and the limit \(\lim _{p\to +\infty }\hat {\lambda }_{p,\min \limits }\in [0,n]\) exists.

Remark 6.6

For p ≥ 1, let

$$ C_{p}:=\max_{f\in\text{span}(\mathcal{I}^{h}:h\in H)}\frac{{\sum\limits}_{i\in V}\deg(i)|f(i)|^{p}}{\min_{g\in \text{span}(\mathcal{I}^{h}:h\in H)^{\bot}}{\sum\limits}_{i\in V}\deg(i)|f(i)-g(i)|^{p}}. $$

By Corollary 6.3 and Remark 6.5, we get that

$$ \lambda_{\min}\le C_{p} \cdot \min_{f\in\text{span}(\mathcal{I}^{h}:h\in H)}\frac{{\sum\limits}_{h\in H}|\langle\mathcal{I}^{h},f\rangle|^{p}}{ {\sum\limits}_{i\in V}\deg(i)|f(i)|^{p}}, $$

which can be seen as a dual inequality with respect to the one in Corollary 6.3. Note that the constant Cp is such that C2 = 1 for all oriented hypergraphs and C1 = 2 in the graph case.

Vertex Partition Problems

In [33], two vertex partition problems for oriented hypergraphs have been discussed: the k-coloring, that is, a function \(f:V\rightarrow \{1,\ldots ,k\}\) such that f(i)≠f(j) for all \(i\ne j\in h\) and for all \(h\in H\), and the generalized Cheeger problem. In this section we discuss more partition problems and we also define a new coloring number that takes signs into account as well.

In [33], the generalized Cheeger constant is defined as

$$ h:=\min_{\emptyset\neq S: \text{Vol} S\leq \frac{1}{2}\text{Vol} \overline{S}}~\frac{e(S)}{\text{Vol}(S)}, $$

where, given \(\emptyset \neq S\subseteq V\),

$$ e(S):=\sum\limits_{h\in H}\Big(\#(S\cap h_{in})-\#(S\cap h_{out})\Big)^{2}, $$

\(\overline {S}:=V\setminus S\) and

$$ \text{Vol}(S):=\sum\limits_{i\in S}\deg (i). $$

We generalize e(S) by letting, for p ≥ 1 and \(\emptyset \neq S\subseteq V\),

$$ e_{p}(S):=\sum\limits_{h\in H}|\#(S\cap h_{in})-\#(S\cap h_{out})|^{p}. $$

Remark 7.1

For a graph, ep(S) = e(S) = |∂S| is the number of edges between S and \(\overline {S}\), for all p. It measures, therefore, the flow between S and \(\overline {S}\). More generally, we can say that computing ep(S) (as well as Vol S) means deleting all vertices in \(\overline {S}\), in the sense of [33, Definition 2.20], and then computing ep (respectively, the volume) on the vertex set of the resulting sub-hypergraph. Furthermore, when ep is computed on the vertex set,

$$ 0 \leq e_{p}(V)=\sum\limits_{h\in H}|\#h_{in}-\#h_{out}|^{p}\leq \sum\limits_{h\in H}|\#h|^{p}, $$

where the first inequality is an equality if and only if #hin = #hout for each hyperedge, and the second one is an equality if and only if each hyperedge has either only inputs or only outputs. We can therefore see the case ep(V) = 0 as a balance condition: #hin = #hout means that what comes in is the same as what goes out. In this sense, ep measures a flow also in the general case.
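To make the definition concrete, here is a small Python sketch (the data structures and names are ours, not from the paper) that evaluates ep(S) and Vol(S) from the input/output sets of each hyperedge:

```python
from itertools import chain

# Hypothetical encoding: each hyperedge is a pair (inputs, outputs).
# Example: h1 has inputs {0, 1} and output {2}; h2 has inputs {0, 2}, outputs {1, 3}.
H = [({0, 1}, {2}), ({0, 2}, {1, 3})]
V = set(chain.from_iterable(h_in | h_out for h_in, h_out in H))

def deg(i, H):
    """Number of hyperedges containing vertex i (as input or output)."""
    return sum(1 for h_in, h_out in H if i in h_in or i in h_out)

def vol(S, H):
    """Vol(S) = sum of the degrees of the vertices of S."""
    return sum(deg(i, H) for i in S)

def e_p(S, H, p):
    """e_p(S) = sum over hyperedges of |#(S ∩ h_in) - #(S ∩ h_out)|^p."""
    return sum(abs(len(S & h_in) - len(S & h_out)) ** p for h_in, h_out in H)
```

For instance, a hypergraph in which every hyperedge satisfies #hin = #hout has ep(V) = 0, which is exactly the balance condition above.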

k-cut Problems

We now generalize the balanced minimum k-cut problem and the max k-cut problem, known for graphs [15, 35], to the case of hypergraphs.

Definition 7.2

Given k ∈{2,…,n}, the balanced minimum k-cut is

$$ \min_{\text{partition }(V_{1},\ldots,V_{k})}\sum\limits_{i=1}^{k}\frac{e_{p}(V_{i})}{\text{Vol}(V_{i})}. $$

The maximum k-cut is

$$ \max_{\text{partition }(V_{1},\ldots,V_{k})}\sum\limits_{i=1}^{k} e_{p}(V_{i}). $$
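On small instances, both optima in Definition 7.2 can be computed by exhaustive enumeration. The sketch below is illustrative only (the enumeration is exponential in the number of vertices) and encodes each hyperedge as a pair (inputs, outputs), a convention of ours:

```python
from itertools import product

def e_p(S, H, p):
    # e_p(S) = sum over hyperedges of |#(S ∩ h_in) - #(S ∩ h_out)|^p
    return sum(abs(len(S & h_in) - len(S & h_out)) ** p for h_in, h_out in H)

def vol(S, H):
    # Vol(S): each vertex of S counts once per hyperedge containing it
    return sum(sum(1 for h_in, h_out in H if i in h_in or i in h_out) for i in S)

def k_cuts(V, H, k, p):
    """Brute-force balanced minimum k-cut and maximum k-cut (Definition 7.2)."""
    V = sorted(V)
    best_min, best_max = float("inf"), float("-inf")
    for labels in product(range(k), repeat=len(V)):
        parts = [set(v for v, l in zip(V, labels) if l == r) for r in range(k)]
        if any(not P for P in parts):  # require a genuine k-partition
            continue
        best_min = min(best_min, sum(e_p(P, H, p) / vol(P, H) for P in parts))
        best_max = max(best_max, sum(e_p(P, H, p) for P in parts))
    return best_min, best_max
```

For a graph (each hyperedge one input and one output), e_p(S) counts the boundary edges of S, so the maximum 2-cut equals twice the classical max-cut value.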

Lemma 7.3

For each \(\emptyset \neq S\subseteq V\) and for each p ≥ 1,

$$ \lambda_{1}\leq \frac{e_{p}(S)}{\text{Vol} S}\leq \lambda_{n}. $$

Therefore, in particular, for each k ∈{2,…,n}

$$ \lambda_{n}\geq \frac1k\cdot \max_{\text{partition }(V_{1},\ldots,V_{k})}\sum\limits_{i=1}^{k}\frac{e_{p}(V_{i})}{\text{Vol}(V_{i})}\geq \frac{1}{k\cdot \text{Vol}(V)}\max_{\text{partition }(V_{1},\ldots,V_{k})}\sum\limits_{i=1}^{k} e_{p}(V_{i}) $$

and

$$ \lambda_{1}\leq \frac1k\cdot \min_{\text{partition }(V_{1},\ldots,V_{k})}\sum\limits_{i=1}^{k}\frac{e_{p}(V_{i})}{\text{Vol}(V_{i})}. $$

Proof

Let f ∈ C(V) be 1 on S and 0 on \(\bar {S}\). Then,

$$ \text{RQ}_{p}(f)=\frac{e_{p}(S)}{\text{Vol}(S)}. $$

The second claim follows by applying the first one to all the Vi’s. □

Signed Coloring Number

We now introduce the new notion of signed coloring number, that takes into account also the input/output structure of the hypergraph. We denote by χ(Γ) the coloring number defined in [33].

Definition 7.4

A signed k-coloring of the vertices is a function f : V →{1,…,k} such that, for all h ∈ H, f(i) ≠ f(j) if i and j are anti-oriented in h. The signed coloring number of Γ, denoted χsign(Γ), is the minimal k such that there exists a signed k-coloring.

Remark 7.5

Note that χsign(Γ) ≤ χ(Γ). Also, χsign ≤ 2 if and only if Γ is bipartite.
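On small instances, χsign can be computed by brute force directly from Definition 7.4. The sketch below (our own encoding; we exclude the degenerate pair i = j, i.e., we assume no vertex is anti-oriented with itself) enumerates colorings of increasing k:

```python
from itertools import product

def chi_sign(V, H, max_k=None):
    """Smallest k admitting a signed k-coloring (Definition 7.4):
    f(i) != f(j) whenever i and j are anti-oriented in some hyperedge,
    i.e. one is an input and the other an output of the same h."""
    V = sorted(V)
    idx = {v: r for r, v in enumerate(V)}
    conflicts = set()
    for h_in, h_out in H:
        for i in h_in:
            for j in h_out:
                if i != j:  # assumption: ignore vertices that are both input and output
                    conflicts.add((idx[i], idx[j]))
    for k in range(1, (max_k or len(V)) + 1):
        for f in product(range(k), repeat=len(V)):
            if all(f[a] != f[b] for a, b in conflicts):
                return k
    return None
```

A hyperedge with only inputs imposes no constraint, consistent with the definition taking signs into account.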

Applying Lemma 7.3 to the signed coloring number, we get the following corollary.

Corollary 7.6

Let χsign := χsign(Γ) and let \(V_{1},\ldots ,V_{\chi _{\text {sign}}}\) be the corresponding coloring classes. For each p ≥ 1,

$$ \lambda_{1}\leq \frac{1}{\chi_{\text{sign}}}\left( \sum\limits_{i=1}^{\chi_{\text{sign}}}\sum\limits_{h\in H}\frac{|\#(V_{i}\cap h)|^{p}}{\text{Vol}(V_{i})}\right) \leq \lambda_{n}. $$
(22)

Also, the upper bound in (22) becomes an equality for p = 1.

Proof

The first fact follows from Lemma 7.3 since, by definition of signed coloring number,

$$ |\#(V_{i}\cap h_{in})-\#(V_{i}\cap h_{out})|=\#(V_{i}\cap h), $$

for each coloring class Vi.

In the particular case of p = 1, \({\sum \limits }_{h\in H}\#(V_{i}\cap h)=\text {Vol}(V_{i})\) for each i, therefore

$$ \frac{1}{\chi_{\text{sign}}}\left( \sum\limits_{i=1}^{\chi_{\text{sign}}}\sum\limits_{h\in H}\frac{\#(V_{i}\cap h)}{\text{Vol}(V_{i})}\right)=1. $$

Since we know, from Lemma 4.2, that \(\max \limits _{f}\text {RQ}_{1}(f)=1\), this proves that the upper bound in (22) becomes an equality for p = 1. □

Remark 7.7

The fact that the upper bound in (22) becomes an equality for p = 1 is particularly interesting, because this is similar to what happens for the Cheeger constant h in the case of graphs, and for the Cheeger-like constant Q defined in [22] for graphs and generalized in [31] for hypergraphs. In fact, we have that:

  1. 1.

    For connected graphs, the Cheeger constant h can be used for bounding λ2 in the case of Δ2 and, as shown in [8, 17], it is equal to λ2 in the case of Δ1.

  2. 2.

    For general hypergraphs, the Cheeger-like constant Q can be used for bounding λn in the case of Δ2 and \({{\Delta }^{H}_{2}}\), and it is equal to λn in the case of \({{\Delta }^{H}_{1}}\) (cf. [31]).

  3. 3.

    In (22) we again have something similar, because the quantity that bounds λn from below for Δp equals λn for Δ1.

Of course, the main difference between the last case and the first two is that h and Q are constants that are independent of p, while the quantity in (22) changes when p changes.

Remark 7.8

In the case of graphs, by definition of signed coloring number we have that #(Vi ∩ h) ∈ {0,1} for each coloring class Vi and for each edge h. In particular,

$$ \sum\limits_{e\in E}|\#(V_{i}\cap e)|^{p}= \text{Vol}(V_{i}) $$

and the constant appearing in (22) is equal to 1 for all p.

Multiway Partitioning

In this section we generalize the notion of k-cut and we use it for bounding the smallest and largest eigenvalue of the classical Laplacian Δ2.

Definition 7.9

A k-tuple (S1,…,Sk) of sets \(S_{r}\subseteq V\) is called a (k,l)-family if it covers S1 ∪… ∪ Sk exactly l times (i.e., each vertex i ∈ S1 ∪… ∪ Sk lies in exactly l sets \(S_{i_{1}},\ldots ,S_{i_{l}}\)). If, furthermore, S1 ∪… ∪ Sk = V, then we call the (k,l)-family a (k,l)-cover.

Remark 7.10

A (k,1)-cover is a k-partition (or k-cut).
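Definition 7.9 and Remark 7.10 reduce to a simple membership count; the following sketch (helper names are ours) checks the (k,l)-family and (k,l)-cover conditions:

```python
from collections import Counter
from itertools import chain

def is_kl_family(sets, l):
    """Definition 7.9: every vertex of the union lies in exactly l of the sets."""
    counts = Counter(chain.from_iterable(sets))
    return all(c == l for c in counts.values())

def is_kl_cover(sets, l, V):
    """A (k, l)-family whose union is the whole vertex set V."""
    return is_kl_family(sets, l) and set().union(*sets) == set(V)
```

In particular, a (k,1)-cover is exactly a k-partition, as in Remark 7.10.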

Theorem 7.11

Let λ1 and λn be the smallest and the largest eigenvalue of the classical normalized Laplacian Δ2, respectively. For any (k,l)-family,

$$ \lambda_{1}\leq \frac{k\cdot \big(e(S_{1})+\cdots+ e(S_{k})\big)-l^{2}\cdot e(S_{1}\cup{\cdots} \cup S_{k})}{(k-l)\cdot l\cdot \text{Vol}(S_{1}\cup{\cdots} \cup S_{k})} \leq \lambda_{n}. $$

Proof

We first focus on the case that (S1,…,Sk) is a (k,l)-cover. For r ∈{1,…,k}, define a function \(f_{r}:V\to \mathbb {R}\) by

$$ f_{r}(i)=\left\{\begin{array}{ll} t&\quad\text{ if }i\in S_{r},\\ s&\quad\text{ if }i\not\in S_{r}. \end{array}\right. $$

Then

$$ \begin{array}{@{}rcl@{}} &&\left( \sum\limits_{j\in h_{in}} f_{r}(j)-\sum\limits_{j^{\prime}\in h_{out}} f_{r}(j^{\prime})\right)^{2}\\ &&= \left( t\#(S_{r}\cap h_{in})-t\#(S_{r}\cap h_{out})+s\#(h_{in}\setminus S_{r})-s\#(h_{out}\setminus S_{r})\right)^{2}\\ &&= \left( t\#(S_{r}\cap h_{in})-t\#(S_{r}\cap h_{out})+s\#h_{in}-s\#(S_{r}\cap h_{in})-s\#h_{out}+s\#(S_{r}\cap h_{out})\right)^{2}\\ &&= \left( (t-s)(\#(S_{r}\cap h_{in})-\#(S_{r}\cap h_{out}))+s(\#h_{in}-\#h_{out})\right)^{2}\\ &&= (t-s)^{2}\left( \#(S_{r}\cap h_{in})-\#(S_{r}\cap h_{out})\right)^{2}+s^{2}\left( \#h_{in}-\#h_{out}\right)^{2}\\ &&\quad+2(t-s)s\left( \#(S_{r}\cap h_{in})-\#(S_{r}\cap h_{out})\right)(\#h_{in}-\#h_{out}). \end{array} $$

Consequently,

$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{r=1}^{k}\sum\limits_{h\in H} \left( \sum\limits_{j\in h_{in}} f_{r}(j)-\sum\limits_{j^{\prime}\in h_{out}} f_{r}(j^{\prime})\right)^{2}\\ &&= (t-s)^{2}\sum\limits_{r=1}^{k}\sum\limits_{h\in H}\left( \#(S_{r}\cap h_{in})-\#(S_{r}\cap h_{out})\right)^{2} + s^{2}k\sum\limits_{h\in H}\left( \#h_{in}-\#h_{out}\right)^{2}\\ &&\quad+2(t-s)s\sum\limits_{h\in H}\sum\limits_{r=1}^{k}\left( \#(S_{r}\cap h_{in})-\#(S_{r}\cap h_{out})\right)(\#h_{in}-\#h_{out})\\ &&= (t-s)^{2}\sum\limits_{r=1}^{k}\sum\limits_{h\in H}\left( \#(S_{r}\cap h_{in}) - \#(S_{r}\cap h_{out})\right)^{2\!}+\!(s^{2}k + 2(t - s)sl)\sum\limits_{h\in H}(\#h_{in}-\#h_{out})^{2}\\ &&= (t-s)^{2}\sum\limits_{r=1}^{k} e(S_{r})+(s^{2}k+2(t-s)sl)e(V), \end{array} $$

where we have used the equality

$$ \sum\limits_{r=1}^{k}\left(\#(S_{r}\cap h_{in})-\#(S_{r}\cap h_{out})\right)=l(\#h_{in}-\#h_{out}), $$

since each vertex in h is covered exactly l times by S1,…,Sk (l ≤ k).

Also, \({\sum \limits }_{i\in V} \deg (i)f_{r}(i)^{2}=\text {Vol}(S_{r})t^{2}+(\text {Vol}(V)-\text {Vol}(S_{r}))s^{2}\) and \({\sum }_{r=1}^{k}\text {Vol}(S_{r})=l\text {Vol}(V)\). Hence,

$$ \sum\limits_{r=1}^{k}\sum\limits_{i\in V} \deg(i)f_{r}(i)^{2}=l\text{Vol}(V)(t^{2}-s^{2})+k\text{Vol}(V)s^{2}. $$

By the basic inequality

$$ \lambda_{n}\ge \frac{{\sum\limits}_{r=1}^{k}{\sum\limits}_{h\in H} \left( {\sum\limits}_{j\in h_{in}} f_{r}(j)-{\sum\limits}_{j^{\prime}\in h_{out}} f_{r}(j^{\prime})\right)^{2}}{{\sum\limits}_{r=1}^{k}{\sum\limits}_{i\in V} \deg(i)f_{r}(i)^{2}} \ge\lambda_{1}, $$

we have

$$ \eta(t,s):=\frac{(t-s)^{2}{\sum\limits}_{r=1}^{k} e(S_{r})+(s^{2}k+2(t-s)sl)e(V)}{\text{Vol}(V)(l(t^{2}-s^{2})+k s^{2})} \in[\lambda_{1},\lambda_{n}]. $$

We can verify that the minimum and maximum of the above quantity belong to

$$ \left\{\frac{{\sum\limits}_{r=1}^{k} e(S_{r})}{\text{Vol}(V)}\frac{k}{l(k-l)}-\frac{e(V)}{\text{Vol}(V)}\frac{l}{k-l},~\frac{e(V)}{\text{Vol}(V)}\right\}. $$

To see this, we make the following observations.

  1. 1.

For s = 0, the quantity η(t,s) is \(\frac {{\sum \limits }_{r=1}^{k}e(S_{r})}{l\text {Vol}(V)}\).

  2. 2.

    For s≠ 0, the quantity η(t,s) is

    $$ \begin{array}{@{}rcl@{}} &&\frac{(\frac ts-1)^{2}\frac{1}{l}{\sum\limits}_{r=1}^{k} e(S_{r})+\left( \frac kl+2(\frac ts-1)\right)e(V)}{\text{Vol}(V)\left( (\frac ts)^{2}-1+\frac kl\right)}\\ &&=\frac{e(V)}{\text{Vol}(V)}+\frac{(\frac ts-1)^{2}}{(\frac ts)^{2}-1+\frac kl}\frac{\frac1l{\sum\limits}_{r=1}^{k} e(S_{r})-e(V)}{\text{Vol}(V)}. \end{array} $$

In fact, since \(\max \limits _{(t,s)\ne (0,0)}\frac {(\frac ts-1)^{2}}{(\frac ts)^{2}-1+\frac kl}=\frac {k}{k-l}\) and \(\min \limits _{(t,s)\ne (0,0)}\frac {(\frac ts-1)^{2}}{(\frac ts)^{2}-1+\frac kl}=0\) (recall that l < k),

    $$ \left\{\max_{s\ne0,t\in\mathbb{R}}\eta(t,s), \min_{s\ne0,t\in\mathbb{R}}\eta(t,s)\right\} = \left\{\frac{e(V)}{\text{Vol}(V)}, \frac{e(V)}{\text{Vol}(V)} + \frac{k}{k - l}\frac{\frac1l{\sum\limits}_{r=1}^{k} e(S_{r}) - e(V)}{\text{Vol}(V)}\right\}. $$

The proof of the claim is then completed by observing that

$$ \frac{{\sum\limits}_{r=1}^{k}e(S_{r})}{l\text{Vol}(V)}=\frac lk \frac{e(V)}{\text{Vol}(V)}+\left( 1-\frac lk\right)\left( \frac{e(V)}{\text{Vol}(V)}+ \frac{k}{k-l}\frac{\frac1l{\sum\limits}_{r=1}^{k} e(S_{r})-e(V)}{\text{Vol}(V)}\right). $$

For a general (k,l)-family, we can consider \(V^{\prime }:=S_{1}\cup \ldots \cup S_{k}\) and \(H^{\prime }:=\{h\cap V^{\prime }:h\in H\}\). Then, \({{\varGamma }}^{\prime }:=(V^{\prime },H^{\prime })\) is the sub-hypergraph of Γ restricted to \(V^{\prime }\). According to [33, Lemma 2.21], \(\lambda _{n}({{\varGamma }})\ge \lambda _{\max \limits }({{\varGamma }}^{\prime })\) and \(\lambda _{1}({{\varGamma }})\le \lambda _{1}({{\varGamma }}^{\prime })\). Applying the case of the (k,l)-cover to \({{\varGamma }}^{\prime }\), we complete the proof. □

Corollary 7.12

$$ \lambda_{1}\leq \frac{k(e(S_{1})+\cdots+ e(S_{k}))-l^{2}e(V)}{(k-l)l\text{Vol}(V)}\leq \lambda_{n}. $$

Remark 7.13

For a graph, e(S) = |∂S| and a (2,1)-cover is a standard 2-cut. Theorem 7.11 shows that

$$ \lambda_{n}\ge 4\max_{S\subset V}\frac{|\partial(S)|}{\text{Vol}(V)}, $$

where \(2\max \limits _{S\subset V}\frac {|\partial (S)|}{\text {Vol}(V)}\) is the normalized max-cut ratio.

Also, Theorem 7.11 applied to (2,1)-families for a graph implies that

$$ \begin{array}{@{}rcl@{}} \lambda_{n}&\ge& \max_{S_{1}\cap S_{2}=\emptyset}\frac{2(e(S_{1})+e(S_{2}))-e(S_{1}\cup S_{2})}{\text{Vol}(S_{1})+\text{Vol}(S_{2})}\\ &=&\max_{S_{1}\cap S_{2}=\emptyset}\frac{4|E(S_{1},S_{2})|+|\partial(S_{1}\cup S_{2})|}{\text{Vol}(S_{1})+\text{Vol}(S_{2})}\\ &\ge& 2\max_{S_{1}\cap S_{2}=\emptyset}\frac{2|E(S_{1},S_{2})|}{\text{Vol}(S_{1})+\text{Vol}(S_{2})}, \end{array} $$

where \(\max \limits _{S_{1}\cap S_{2}=\emptyset }\frac {2|E(S_{1},S_{2})|}{\text {Vol}(S_{1})+\text {Vol}(S_{2})}\) is exactly the dual Cheeger constant [2].

Interestingly, applying Theorem 7.11 to a (k,1)-cover of a graph, we get

$$ \lambda_{n}\ge \frac{k}{k-1}\cdot \frac{\max_{\text{partition }(V_{1},\ldots,V_{k})}{\sum\limits}_{i=1}^{k}|\partial V_{i}|}{\text{Vol}(V)} $$

which relates to the max k-cut problem.
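For a concrete graph, the first bound in this remark is easy to verify numerically. The following sketch (an illustration of ours, using the triangle graph and assuming numpy is available) compares the largest eigenvalue of the normalized Laplacian with the normalized max-cut bound:

```python
import numpy as np
from itertools import combinations

# Triangle K3 as an oriented hypergraph: each edge has one input and one output
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
d = A.sum(axis=1)
# Normalized Laplacian: I - D^{-1/2} A D^{-1/2}
L = np.eye(n) - A / np.sqrt(np.outer(d, d))
lam_max = np.linalg.eigvalsh(L).max()

vol_V = d.sum()
def boundary(S):
    """|∂S|: edges with exactly one endpoint in S."""
    return sum(1 for i, j in edges if (i in S) != (j in S))

# Remark 7.13: lam_max >= 4 * max_{S ⊂ V} |∂S| / Vol(V)
max_ratio = max(4 * boundary(S) / vol_V
                for r in range(1, n)
                for S in combinations(range(n), r))
```

For the triangle, lam_max = 3/2 while the bound evaluates to 4/3, so the inequality holds strictly.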

General Partitions

Lemma 7.14

We have

$$ \lambda_{n}({\Delta}_{p})\ge \max_{t\in\mathbb{R}, c\in(0,1)}\max_{\text{partition }(V_{1},\ldots,V_{k})}\frac{c^{p-1}|t+1|^{p}{\sum}_{r=1}^{k} e_{p}(V_{r})-\left( \frac{c}{1-c}\right)^{p-1}ke_{p}(V)}{\text{Vol}(V)(|t|^{p}+k-1)} $$
(23)

and

$$ \lambda_{1}({\Delta}_{p})\le \min_{t\in\mathbb{R}}\min_{\text{partition }(V_{1},\ldots,V_{k})}\frac{|t+1|^{p}{\sum}_{r=1}^{k} e_{p}(V_{r})+ke_{p}(V)}{\text{Vol}(V)(|t|^{p}+k-1)}. $$
(24)

Proof

Given a partition (V1,…,Vk) of V and r ∈{1,…,k}, define a function \(f_{r}:V\to \mathbb {R}\) by

$$ f_{r}(i):=\left\{\begin{array}{ll} t&\quad\text{ if }i\in V_{r},\\ -1&\quad\text{ if }i\not\in V_{r}. \end{array}\right. $$

Then,

$$ \begin{array}{@{}rcl@{}} &&\left|\sum\limits_{j\in h_{in}} f_{r}(j)-\sum\limits_{j^{\prime}\in h_{out}} f_{r}(j^{\prime})\right|^{p} \\ &&= \left|t\#(V_{r}\cap h_{in})-t\#(V_{r}\cap h_{out})-\#(h_{in}\setminus V_{r})+\#(h_{out}\setminus V_{r})\right|^{p} \\ &&= |(t+1)(\#(V_{r}\cap h_{in})-\#(V_{r}\cap h_{out}))-(\#h_{in}-\#h_{out})|^{p} \\ &&\le |t+1|^{p}|\#(V_{r}\cap h_{in})-\#(V_{r}\cap h_{out})|^{p}+|\#h_{in}-\#h_{out}|^{p}. \end{array} $$
(25)

Consequently,

$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{r=1}^{k}\sum\limits_{h\in H} \left|\sum\limits_{j\in h_{in}} f_{r}(j)-\sum\limits_{j^{\prime}\in h_{out}} f_{r}(j^{\prime})\right|^{p}\\ &&\le\sum\limits_{r=1}^{k}\sum\limits_{h\in H}|t+1|^{p}|\#(V_{r}\cap h_{in})-\#(V_{r}\cap h_{out})|^{p}+|\#h_{in}-\#h_{out}|^{p}\\ &&= |t+1|^{p}\sum\limits_{r=1}^{k} e_{p}(V_{r})+ke_{p}(V). \end{array} $$

Also, we have

$$ \sum\limits_{r=1}^{k}\sum\limits_{i\in V} \deg(i)|f_{r}(i)|^{p}=\text{Vol}(V)(|t|^{p}-1)+k\text{Vol}(V). $$

Now, note that

$$ \lambda_{1}\!\le\! \frac{{\sum\limits}_{r=1}^{k}{\sum\limits}_{h\in H} \left|{\sum\limits}_{j\in h_{in}} f_{r}(j)-{\sum\limits}_{j^{\prime}\in h_{out}} f_{r}(j^{\prime})\right|^{p}}{{\sum\limits}_{r=1}^{k}{\sum\limits}_{i\in V} \deg(i)|f_{r}(i)|^{p}}\!\le\! \frac{|t+1|^{p}{\sum\limits}_{r=1}^{k} e_{p}(V_{r})+ke_{p}(V)}{\text{Vol}(V)(|t|^{p}+k - 1)}. $$

Next, we give a lower bound for (25). By the convexity of t↦|t|p, we have

$$ c\left|\frac{1}{c}(B-A)\right|^{p}+(1-c)\left|\frac{1}{1-c}A\right|^{p} \ge |B|^{p} \qquad \forall A,B\in \mathbb{R},~0<c<1, $$

which implies \(|B-A|^{p}\ge c^{p-1}|B|^{p}-(\frac {c}{1-c})^{p-1}|A|^{p}\). Thus,

$$ \begin{array}{@{}rcl@{}} &&|(t+1)(\#(V_{r}\cap h_{in})-\#(V_{r}\cap h_{out}))-(\#h_{in}-\#h_{out})|^{p}\\ &&\ge c^{p-1}|(t+1)(\#(V_{r}\cap h_{in})-\#(V_{r}\cap h_{out}))|^{p}-\left( \frac{c}{1-c}\right)^{p-1}|(\#h_{in}-\#h_{out})|^{p}. \end{array} $$

Finally, the same method gives

$$ \frac{c^{p-1}|t+1|^{p}{\sum\limits}_{r=1}^{k} e_{p}(V_{r})-\left( \frac{c}{1-c}\right)^{p-1}ke_{p}(V)}{\text{Vol}(V)(|t|^{p}+k-1)}\le\lambda_{n}. $$

□
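The elementary inequality \(|B-A|^{p}\ge c^{p-1}|B|^{p}-(\frac {c}{1-c})^{p-1}|A|^{p}\) used in the proof can be spot-checked numerically; the following Python snippet (a sanity check of ours, not part of the argument) samples random A, B, p, c:

```python
import random

random.seed(0)

def lower_bound_holds(A, B, p, c):
    # |B - A|^p >= c^{p-1} |B|^p - (c/(1-c))^{p-1} |A|^p
    lhs = abs(B - A) ** p
    rhs = c ** (p - 1) * abs(B) ** p - (c / (1 - c)) ** (p - 1) * abs(A) ** p
    return lhs >= rhs - 1e-9  # small tolerance for rounding

checks = all(
    lower_bound_holds(random.uniform(-5, 5), random.uniform(-5, 5),
                      random.uniform(1, 4), random.uniform(0.05, 0.95))
    for _ in range(10_000)
)
```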

Corollary 7.15

The following constants are smaller than or equal to λnp):

$$ \begin{array}{@{}rcl@{}} &&\frac{2(\frac k2)^{p}{\sum\limits}_{r=1}^{k} e_{p}(V_{r})-ke_{p}(V)}{((k-1)^{p}+k-1)\text{Vol}(V)},\quad \frac{k{\sum\limits}_{r=1}^{k} e_{p}(V_{r})-\frac{k}{(k-1)^{p-1}}e_{p}(V)}{((k-1)^{p}+k-1)\text{Vol}(V)},\\ &&\frac{2{\sum\limits}_{r=1}^{k} e_{p}(V_{r})-ke_{p}(V)}{k\text{Vol}(V)}. \end{array} $$

Proof

Taking t = k − 1 and \(c=\frac 12\) in (23), we have the first.

Taking t = k − 1 and \(c=\frac 1k\) in (23), we get the middle one.

Taking t = 1 and \(c=\frac 12\) in (23), we obtain the last one. □

Corollary 7.16

The following constants are larger than or equal to λ1p):

$$ \frac{e_{p}(V)}{\text{Vol}(V)},\quad \frac{{\sum}_{r=1}^{k}e_{p}(V_{r})}{\text{Vol}(V)}. $$

Proof

Taking t = − 1 in (24), we get the first constant. Letting \(t\to \infty \) in (24), we obtain the second one. □

Corollary 7.17

$$ \lambda_{n}({\Delta}_{1})\ge \max_{\text{partition }(V_{1},\ldots,V_{k})}\frac{{\sum\limits}_{r=1}^{k} e_{1}(V_{r}) }{\text{Vol}(V) }\quad\text{ and }\quad \lambda_{1}({\Delta}_{1})\le \min\limits_{\text{partition }(V_{1},\ldots,V_{k})}\frac{ {\sum\limits}_{r=1}^{k} e_{1}(V_{r}) }{\text{Vol}(V)}. $$

Proof

Taking p = 1 in Lemma 7.14, we have

$$ \begin{array}{@{}rcl@{}} \lambda_{n}({\Delta}_{1})&\ge& \max_{t\in\mathbb{R}}\max\limits_{\text{partition }(V_{1},\ldots,V_{k})}\frac{|t+1|{\sum\limits}_{r=1}^{k} e_{1}(V_{r})-ke_{1}(V)}{\text{Vol}(V)(|t|+k-1)}\\ &=& \max_{\text{partition }(V_{1},\ldots,V_{k})}\max_{t\in\mathbb{R}}\frac{|t+1|{\sum\limits}_{r=1}^{k} e_{1}(V_{r})-ke_{1}(V)}{\text{Vol}(V)(|t|+k-1)}\\ &=&\max_{\text{partition }(V_{1},\ldots,V_{k})}\frac{ {\sum\limits}_{r=1}^{k} e_{1}(V_{r}) }{\text{Vol}(V)} \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} \lambda_{1}({\Delta}_{1})&\le& \min_{t\in\mathbb{R}}\min_{\text{partition }(V_{1},\ldots,V_{k})}\frac{|t+1|{\sum\limits}_{r=1}^{k} e_{1}(V_{r})+ke_{1}(V)}{\text{Vol}(V)(|t|+k-1)}\\ &=&\min\left\{\frac{e_{1}(V)}{\text{Vol}(V)},\min_{\text{partition }(V_{1},\ldots,V_{k})}\frac{ {\sum\limits}_{r=1}^{k} e_{1}(V_{r}) }{\text{Vol}(V) } \right\}. \end{array} $$

□

Hyperedge Partition Problems

While in the previous section we have discussed vertex partition problems and their relation to Δp, here we introduce the analogous hyperedge partition problems and their relations with \({{\Delta }_{p}^{H}}\). We start by defining, for each \(\emptyset \neq \hat {H}\subset H\), a quantity \(e_{p}(\hat {H})\) analogous to the quantity ep(S) defined for subsets of vertices. Namely, we let

$$ e_{p}(\hat{H}):=\sum\limits_{i\in V}\frac{1}{\deg (i)}|\#(\hat{H}\cap i_{in})-\#(\hat{H}\cap i_{out})|^{p} $$

where, given i ∈ V, we let

$$ \begin{array}{@{}rcl@{}} i_{in}&:=&\{\text{ hyperedges in which \textit{i} is an input }\},\\ i_{out}&:=&\{\text{ hyperedges in which \textit{i} is an output }\}. \end{array} $$

We also define

$$ \eta_{p}(\hat{H}):=\frac{e_{p}(\hat{H})}{\#\hat{H}}. $$
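These hyperedge quantities can be evaluated directly from the definition; the sketch below (our own encoding, with each hyperedge a pair (inputs, outputs)) computes \(e_p(\hat H)\) and \(\eta_p(\hat H)\):

```python
def e_p_edges(H_hat, H, p):
    """e_p(Ĥ) = sum_i (1/deg i) |#(Ĥ ∩ i_in) - #(Ĥ ∩ i_out)|^p,
    where i_in / i_out are the hyperedges having i as input / output."""
    V = set()
    for h_in, h_out in H:
        V |= h_in | h_out
    total = 0.0
    for i in V:
        deg_i = sum(1 for h_in, h_out in H if i in h_in or i in h_out)
        n_in = sum(1 for h_in, h_out in H_hat if i in h_in)
        n_out = sum(1 for h_in, h_out in H_hat if i in h_out)
        total += abs(n_in - n_out) ** p / deg_i
    return total

def eta_p(H_hat, H, p):
    """η_p(Ĥ) = e_p(Ĥ) / #Ĥ."""
    return e_p_edges(H_hat, H, p) / len(H_hat)
```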

Remark 8.1

Analogously to the vertex case, we can say that computing \(e_{p}(\hat {H})\) means deleting all hyperedges in \(H\setminus \hat {H}\) and then computing ep on the hyperedge set of the sub-hypergraph obtained. It is therefore interesting to observe that, when ep is computed on H,

$$ 0\leq e_{p}(H)=\sum\limits_{i\in V}\frac{1}{\deg (i)}\cdot |\# i_{in}-\# i_{out}|^{p}\leq \sum\limits_{i\in V}\deg (i)^{p-1}, $$

where the first inequality is an equality if and only if each vertex is as often an input as an output, while the second one is an equality if and only if each vertex has the same sign in all hyperedges in which it is contained.

Furthermore, if the sub-hypergraph \(\hat {{{\varGamma }}}:=(V,\hat {H})\) of Γ is bipartite, without loss of generality we can assume that each vertex is either always an input or always an output for each hyperedge in which it is contained. In this case,

$$ e_{p}(\hat{H})=\sum\limits_{i\in V}\frac{\deg_{\hat{{{\varGamma}}}}(i)^{p}}{\deg (i)} $$

and, in particular, \(\eta _{p}(\hat {H})\) coincides with the quantity in [31, Definition 2.9]. Moreover, in the particular case when \(\hat {H}=\{h\}\) consists of a single hyperedge, then

$$ e_{p}(\{h\})=\eta_{p}(\{h\})=\sum\limits_{i\in h}\frac{1}{\deg i}\quad \text{ for all }p. $$

We now generalize [31, Lemma 4.1] to all p.

Proposition 8.2

For all p, we have that

$$ \max_{\hat{{{\varGamma}}}=(V,\hat{H})\subset{{\varGamma}} \text{ bipartite}} \eta_{p}(\hat{H})\leq \mu_{m}, $$

with equality if p = 1.

Proof

Let \(\gamma ^{\prime }:H\rightarrow \mathbb {R}\) be 1 on \(\hat {H}\) and 0 otherwise. Then, up to changing (without loss of generality) the orientations of the hyperedges,

$$ \begin{array}{@{}rcl@{}} \mu_{m}&=&\max_{\gamma:H\rightarrow\mathbb{R}}\frac{{\sum}_{i\in V}\frac{1}{\deg (i)}\cdot \left| {\sum\limits}_{h_{\text{in}}: i\text{ input}}\gamma(h_{\text{in}})-{\sum\limits}_{h_{\text{out}}: i\text{ output}}\gamma(h_{\text{out}})\right|^{p}}{{\sum}_{h\in H}|\gamma(h)|^{p}}\\ &\geq& \frac{{\sum\limits}_{i\in V}\frac{1}{\deg (i)}\cdot \left| {\sum\limits}_{h_{\text{in}}: i\text{ input}}\gamma^{\prime}(h_{\text{in}})-{\sum\limits}_{h_{\text{out}}: i\text{ output}}\gamma^{\prime}(h_{\text{out}})\right|^{p}}{{\sum}_{h\in H}|\gamma^{\prime}(h)|^{p}}\\ &\geq& \frac{{\sum\limits}_{i\in \hat{V}}\frac{1}{\deg (i)}\cdot \left| {\sum\limits}_{h_{\text{in}}: i\text{ input}}\gamma^{\prime}(h_{\text{in}})-{\sum\limits}_{h_{\text{out}}: i\text{ output}}\gamma^{\prime}(h_{\text{out}})\right|^{p}}{{\sum\limits}_{h\in H}|\gamma^{\prime}(h)|^{p}}\\ &=& \frac{{\sum\limits}_{i\in \hat{V}}\frac{\deg_{\hat{{{\varGamma}}}}(i)^{p}}{\deg (i)}}{|\hat{H}|}=\eta_{p}(\hat{H}). \end{array} $$

Since the above inequality is true for all \(\hat {{{\varGamma }}}\), this proves the first claim.

If p = 1, then

$$ \mu_{m}\geq \max_{\hat{{{\varGamma}}}=(V,\hat{H})\text{ bipartite}}\eta_{p}(\hat{H})\geq \max_{h\in H}\eta_{p}(\{h\})=Q, $$

where Q is the Cheeger-like quantity defined in [31]. By [31, Lemma 5.2], Q = μm. Therefore, the inequalities above become equalities. □

Now, analogously to the vertex partition problems, we discuss hyperedge partition problems.

Definition 8.3

A k-hyperedge partition is a partition of the hyperedge set into k disjoint sets, H = H1 ⊔… ⊔ Hk. The balanced minimum k-hyperedge cut is

$$ \min_{\text{partition }(H_{1},\ldots,H_{k})}\sum\limits_{i=1}^{k}\eta_{p}(H_{i}); $$

The maximum k-hyperedge cut is

$$ \max_{\text{partition }(H_{1},\ldots,H_{k})}\sum\limits_{i=1}^{k}e_{p}(H_{i}). $$

The signed hyperedge coloring number, denoted \(\chi _{\text {sign}}^{H}\), is the minimal k for which there exists a function \(\gamma :H\rightarrow \{1,\ldots ,k\}\) such that, for all iV, \(\gamma (h)\neq \gamma (h^{\prime })\) if i is an input for h and an output for \(h^{\prime }\).
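As in the vertex case, the two hyperedge cut problems of Definition 8.3 can be solved by brute force on toy instances. The sketch below (exponential enumeration, illustrative only; hyperedges encoded as (inputs, outputs) pairs, a convention of ours) evaluates both optima:

```python
from itertools import product

def e_p_edges(H_hat, H, p):
    # e_p(Ĥ) = sum_i (1/deg i) |#(Ĥ ∩ i_in) - #(Ĥ ∩ i_out)|^p
    V = set().union(*(h_in | h_out for h_in, h_out in H))
    total = 0.0
    for i in V:
        deg_i = sum(1 for h_in, h_out in H if i in h_in or i in h_out)
        diff = sum((i in h_in) - (i in h_out) for h_in, h_out in H_hat)
        total += abs(diff) ** p / deg_i
    return total

def hyperedge_k_cuts(H, k, p):
    """Brute-force balanced minimum and maximum k-hyperedge cuts (Definition 8.3)."""
    best_min, best_max = float("inf"), float("-inf")
    for labels in product(range(k), repeat=len(H)):
        parts = [[h for h, l in zip(H, labels) if l == r] for r in range(k)]
        if any(not P for P in parts):  # require a genuine k-partition of H
            continue
        best_min = min(best_min, sum(e_p_edges(P, H, p) / len(P) for P in parts))
        best_max = max(best_max, sum(e_p_edges(P, H, p) for P in parts))
    return best_min, best_max
```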

The following lemma is the analog of some results regarding vertex partition problems. It relates the balanced minimum k-hyperedge cut and the maximum k-hyperedge cut to the smallest and largest eigenvalues of \({{\Delta }_{p}^{H}}\), respectively.

Lemma 8.4

For each \(\emptyset \neq \hat {H}\subseteq H\) and for each p ≥ 1, \(\mu _{1}\leq \eta _{p}(\hat {H})\leq \mu _{m}\). Therefore, in particular, for each k ∈{2,…,m}

$$ \mu_{m}\geq \frac1k\cdot \max_{\text{partition }(H_{1},\ldots,H_{k})}\sum\limits_{i=1}^{k}\eta_{p}(H_{i})\geq \frac{1}{k\cdot \# H}\max_{\text{partition }(H_{1},\ldots,H_{k})}\sum\limits_{i=1}^{k} e_{p}(H_{i}) $$

and

$$ \mu_{1}\leq \frac1k\cdot \min_{\text{partition }(H_{1},\ldots,H_{k})}\sum\limits_{i=1}^{k}\eta_{p}(H_{i}). $$

Proof

Given Hi, let γ ∈ C(H) be 1 on Hi and 0 otherwise. Then, RQp(γ) = ηp(Hi). Therefore, μ1 ≤ ηp(Hi) ≤ μm. The other claims follow by applying these inequalities to all elements of a partition. □

Corollary 8.5

Let \(\chi _{\text {sign}}^{H}\) be the signed hyperedge coloring number of Γ and let \(H_{1},\ldots ,H_{\chi _{\text {sign}}^{H}}\) be the corresponding coloring classes. Let also Γj := (V,Hj) for \(j\in \{1,\ldots ,\chi _{\text {sign}}^{H}\}\). For each p ≥ 1,

$$ \mu_{1}\leq \frac{1}{\chi_{\text{sign}}^{H}}\left( \sum\limits_{j=1}^{\chi_{\text{sign}}^{H}}\frac{1}{\# H_{j}}\cdot\sum\limits_{i\in V}\frac{\#(H_{j}\cap i)^{p}}{\deg (i)}\right) \leq \mu_{m}. $$

Proof

By the definition of signed hyperedge coloring number,

$$ e_{p}(H_{j})=\sum\limits_{i\in V}\frac{1}{\deg (i)}|\#(H_{j}\cap i_{in})-\#(H_{j}\cap i_{out})|^{p}=\sum\limits_{i\in V}\frac{\#(H_{j}\cap i)^{p}}{\deg (i)}, $$

for each coloring class Hj. Together with Lemma 8.4, this proves the claim. □