Abstract
The p-Laplacian for graphs, as well as the vertex Laplace operator and the hyperedge Laplace operator for the general setting of oriented hypergraphs, are generalized. In particular, both a vertex p-Laplacian and a hyperedge p-Laplacian are defined for oriented hypergraphs, for all p ≥ 1. Several spectral properties of these operators are investigated.
Introduction
Oriented hypergraphs are hypergraphs with the additional structure that each vertex in a hyperedge is either an input, an output, or both. They were introduced in [21], together with two normalized Laplace operators whose spectral properties and possible applications have been investigated also in further works [1, 31, 32, 33]. Here we generalize the Laplace operators on oriented hypergraphs by introducing, for each \(p\in \mathbb {R}_{\geq 1}\), two p-Laplacians. While the vertex p-Laplacian is a known operator for graphs (see for instance [5, 14, 39]), to the best of our knowledge the only edge p-Laplacian for graphs that has been defined is the classical one for p = 2.
Structure of the Paper
In Section 1.1, for completeness of the theory, we discuss the p-Laplacian on Euclidean domains and Riemannian manifolds, and in Section 1.2 we recall the basic notions on oriented hypergraphs. In Section 2 we define the p-Laplacians for p > 1 and we establish their generalized min-max principle, and similarly, in Section 3, we introduce and discuss the 1-Laplacians for oriented hypergraphs. Furthermore, in Section 4 we discuss the smallest and largest eigenvalues of the p-Laplacians for all p, in Section 5 we prove two nodal domain theorems, and in Section 6 we discuss the smallest nonzero eigenvalue. Finally, in Section 7 we discuss several vertex partition problems and their relations to the p-Laplacian eigenvalues, while in Section 8 we discuss hyperedge partition problems.
In [23] we shall build upon the results developed in this paper.
Related Work
It is worth mentioning that, in [18], other vertex p-Laplacians for hypergraphs have been introduced and studied. While these generalized vertex p-Laplacians coincide with the ones that we introduce here in the case of graphs, they do not coincide for general hypergraphs. Also, [18] focuses on classical hypergraphs, while we consider, more generally, oriented hypergraphs.
The p-Laplacian on Euclidean Domains and Riemannian Manifolds
There is a strong analogy between Laplace operators on Euclidean domains and Riemannian manifolds on the one hand and their discrete versions on graphs and hypergraphs on the other, and this analogy is part of the motivation for our work. Therefore, it may be useful to briefly summarize the theory on Euclidean domains and Riemannian manifolds.
Let \({{\varOmega }} \subset \mathbb {R}^{n}\) be a bounded domain, with piecewise Lipschitz boundary ∂Ω, in order to avoid technical issues that are irrelevant for our purposes. More generally, Ω could also be such a domain in a Riemannian manifold.
Let first \(1<p<\infty \). For u in the Sobolev space W^{1,p}(Ω), we may consider the functional
Its Euler–Lagrange operator is the pLaplacian
for p = 2, we have, of course, the standard Laplace operator. Note that we use the − sign in (2) both to make the operator a positive one and to conform to the conventions used in this paper. The eigenvalue problem arises when we look for critical points of I_{p} under the constraint
or equivalently, if we seek critical points of the Rayleigh quotient
among functions u≢0. To make the problem well-posed, we need to impose a boundary condition, and we consider here the Dirichlet condition
On a compact Riemannian manifold M with boundary ∂M, we can do the same when we integrate in (1), (3) with respect to the Riemannian volume measure, and let ∇ and div denote the Riemannian gradient and divergence operators. When ∂M = ∅, we do not need to impose a boundary condition.
Eigenfunctions and eigenvalues then have to satisfy the equation
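For the reader's convenience, the objects just introduced have the following standard forms (we record them in the usual notation of the p-Laplacian literature; the factor 1/p in the functional is a common convention):

```latex
% The functional (1) and its Euler--Lagrange operator, the p-Laplacian (2):
I_p(u) = \frac{1}{p}\int_{\Omega} |\nabla u|^{p}\,dx ,
\qquad
\Delta_p u = -\operatorname{div}\!\big(|\nabla u|^{p-2}\nabla u\big).
% The constraint (3) and the associated Rayleigh quotient:
\int_{\Omega} |u|^{p}\,dx = 1 ,
\qquad
R_p(u) = \frac{\int_{\Omega} |\nabla u|^{p}\,dx}{\int_{\Omega} |u|^{p}\,dx} .
% Dirichlet boundary condition and the eigenvalue equation (4):
u = 0 \ \text{on } \partial\Omega ,
\qquad
\Delta_p u = \lambda\, |u|^{p-2} u .
```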
For \(1<p <\infty \), the functionals in (1) and (3) are strictly convex, and the spectral theory is similar to that for p = 2, that is, the case of the ordinary Laplacian, which is a well studied subject. (See for instance [41] for the situation on a Riemannian manifold). For p = 1, however, the functionals are no longer strictly convex, and things get more complicated. (4) then formally becomes
In (4) for p > 1, we may set the right-hand side equal to 0 at points where u = 0, but this is no longer possible in (5). This eigenvalue problem has been studied by Kawohl, Schuricht and their students and collaborators, as well as by Chang, and we shall summarize their results. Some references are [7, 24, 25, 26, 27, 28, 29, 30, 34, 37].
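For p = 1, the formal limit equation (5) reads as follows in standard notation; neither quotient is defined where the respective denominator vanishes, which is exactly the difficulty just mentioned:

```latex
% Formal eigenvalue equation of the 1-Laplacian (5):
-\operatorname{div}\!\left( \frac{\nabla u}{|\nabla u|} \right)
  = \lambda\, \frac{u}{|u|} .
```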
One therefore formally replaces (5) by defining a substitute z of \(\frac {\nabla u}{|\nabla u|}\) and a substitute s of \(\frac {u}{|u|}\), leading to
where \(s\in L^{\infty }({{\varOmega }})\) satisfies
with
and the vector field \(z\in L^{\infty }({{\varOmega }},\mathbb {R}^{n})\) satisfies
where
Again (7) needs some explanation. In fact, while for p > 1, the natural space to work in is W^{1,p}(Ω), for p = 1, it is no longer W^{1,1}(Ω), but rather BV (Ω). This space (for a short introduction, see for instance [20]) consists of all functions u ∈ L^{1}(Ω) for which
Note that when u ∈ C^{1}(Ω), we have
and thus, BV functions permit such an integration by parts in a weak sense. More precisely, for a BV function u, its distributional gradient is represented by a finite \(\mathbb {R}^{n}\)-valued signed measure Du, and we can write
Also, u ∈ BV (Ω) has a well-defined trace u^{∂Ω} ∈ L^{1}(∂Ω), and (8) generalizes to
where ν is the outer unit normal of ∂Ω. Importantly, BV functions can be discontinuous along hypersurfaces. A Borel set E ⊂Ω has finite perimeter if its characteristic function χ_{E} satisfies
For instance, if the boundary of E is a compact Lipschitz hypersurface, then the perimeter of E is simply the Hausdorff measure \({\mathscr{H}}^{n-1}(\partial E)\). And if E ⊂Ω, we have
The problem with (6), however, is that in general it has too many solutions, as it becomes rather arbitrary on sets of positive measure where u vanishes, see [30]. The solutions that one is really interested in should be the critical points of a variational principle, with the vanishing of the weak slope of [11] as the appropriate criterion. Inner variations provide another necessary criterion [30]. Viscosity solutions provide another criterion which, however, is still not stringent enough [26].
The Cheeger constant of Ω then is defined as
where |E| is the Lebesgue measure of E. A set realizing the infimum in (9) is called a Cheeger set, and every bounded Lipschitz domain Ω possesses at least one Cheeger set. For such a Cheeger set E ⊂Ω, ∂E ∩Ω is smooth except possibly for a singular set of Hausdorff dimension at most n − 8, and it has constant mean curvature \(\frac {1}{n-1}h_{1}({{\varOmega }})\) at all regular points. When Ω is not convex, its Cheeger set need not be unique.
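In standard notation, the finite-perimeter condition and the Cheeger constant (9) read as follows (whether the perimeter is taken relative to Ω or in all of \(\mathbb{R}^n\) is a convention tied to the boundary condition; we write the relative version):

```latex
% Perimeter of a Borel set E relative to Omega:
\operatorname{Per}(E;\Omega) := |D\chi_E|(\Omega)
 = \sup\left\{ \int_{E} \operatorname{div} Z \,dx \;:\;
    Z\in C^{1}_{c}(\Omega,\mathbb{R}^{n}),\ |Z|\le 1 \right\} < \infty .
% The Cheeger constant (9), with |E| the Lebesgue measure of E:
h_1(\Omega) = \inf_{E\subset\Omega,\ |E|>0}
  \frac{\operatorname{Per}(E;\Omega)}{|E|} .
```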
In fact, h_{1}(Ω) equals the first eigenvalue of the 1-Laplacian. More precisely,
is the smallest λ≠ 0 for which there is a nontrivial solution u of (5), and such a u is, up to a multiplicative factor, of the form χ_{E} for a Cheeger set E. Also, if λ_{1,p}(Ω) denotes the smallest nonzero eigenvalue of (4), then
We also have the lower bound
generalizing the original Cheeger bound for p = 2.
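Explicitly, the two facts just stated are usually written as:

```latex
% The smallest nonzero p-eigenvalue converges to the Cheeger constant:
\lim_{p\to 1^{+}} \lambda_{1,p}(\Omega) = h_1(\Omega) ,
% and the Cheeger-type lower bound, which for p = 2 reduces to Cheeger's
% original inequality  lambda_{1,2} >= h_1(\Omega)^2 / 4 :
\lambda_{1,p}(\Omega) \;\ge\; \left( \frac{h_1(\Omega)}{p} \right)^{p} .
```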
More generally, for any family of eigenvalues λ_{k,p}(Ω) of (4), \(\lim _{p\to 1^{+}}\lambda _{k,p}({{\varOmega }})\) is an eigenvalue of (5). The converse is not true, however; (5) may have more solutions than can be obtained as limits of solutions of (4).
The functional |Du|(Ω) also appears in image denoising, in so-called TV models (the acronym TV refers to the fact that |Du|(Ω) is the total variation of the measure Du), introduced in [36]. There, one wants to denoise a function \(f:{{\varOmega }} \to \mathbb {R}\) by smoothing it, and in the TV models, one wants to minimize a functional of the form
\({\int \limits }_{{{\varOmega }}} |u-f|\) is the so-called fidelity term that controls the deviation of the denoised version u from the given data f. μ > 0 is a parameter that balances the smoothness and the fidelity term. Formally, a minimizer u has to satisfy an equation of the form
which is similar to (5). It turns out, however, that when such a model is applied to actual data, the performance is not so good, and it has been found preferable to modify (10) to what is called a nonlocal model in image processing [16]. In [19], such a model was derived from geometric considerations, and this may also provide some insight into the relation with the discrete models considered in this paper. We now recall the construction of that reference.
Let Ω be a domain in \(\mathbb {R}^{n}\) or some more abstract space, and \(\omega :{{\varOmega }} \times {{\varOmega }} \to \mathbb {R}\) a nonnegative, symmetric function. ω(x,y) can be interpreted as some kind of edge weight between the points x,y for any pair (x,y) ∈Ω×Ω. Here x,y can also stand for patches in the image, and in our setting, they could also be vertices in a graph (in which case the integrals below would become sums). We define the average \(\bar {\omega }:{{\varOmega }} \to \mathbb {R}\) of ω by
and assume that \(\bar {\omega }\) is positive almost everywhere. On a graph, while ω is an edge function, \(\bar {\omega }\) would be a vertex function, \(\bar {\omega }(x)\) being the degree of the vertex x with edge weights ω(x,y). We first use \(\bar {\omega }(x)\) and ω(x,y) to define the L^{2}norms for functions \(u:{{\varOmega }}\to \mathbb {R}\) and vector fields p, that is, \(p:{{\varOmega }}\times {{\varOmega }} \to \mathbb {R}\),
and the corresponding norms \(\|u\|\) and \(\|p\|\).
The discrete derivative of a function (an image) \(u:{{\varOmega }}\to \mathbb {R}\) is defined by
Even though Du does not depend on ω, it is in some sense analogous to a gradient, as we shall see below. Its pointwise norm is then given by
The divergence of a vector field \(p:{{\varOmega }} \times {{\varOmega }} \to \mathbb {R}\) is defined by
Note that, in contrast to Du for a function u, the divergence of a vector field depends on the weight ω. For \(u:{{\varOmega }}\to \mathbb {R}\) and \(p:{{\varOmega }}\times {{\varOmega }}\to \mathbb {R}\), we then have
the analog of (8).
With the vector field Du and the divergence operator div, we can define a Laplacian for functions
which in the case of a graph is the Laplacian we have been using. The nonlocal TV (or BV) functional of [19] then is
This leads to the nonlocal TV model
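In one consistent normalization (the constants are a convention, and we do not claim these are literally the ones of [19]), the objects above can be written as follows; the third identity is the analog of (8), and the resulting Laplacian is, up to sign conventions, the weighted normalized graph Laplacian when Ω is a vertex set:

```latex
% Weighted inner products for functions and for vector fields:
(u,v) := \int_{\Omega} \bar\omega(x)\, u(x) v(x)\, dx ,
\qquad
(p,q) := \frac{1}{2} \int_{\Omega}\!\int_{\Omega}
  \omega(x,y)\, p(x,y) q(x,y)\, dx\, dy .
% Discrete derivative (independent of omega) and divergence (dependent on omega):
(Du)(x,y) := u(y) - u(x) ,
\qquad
(\operatorname{div} p)(x) := \frac{1}{2\bar\omega(x)}
  \int_{\Omega} \omega(x,y)\,\big( p(x,y) - p(y,x) \big)\, dy .
% With these choices, integration by parts holds without boundary terms:
(Du, p) = -\,(u, \operatorname{div} p) ,
% and the Laplacian  Delta u := div(Du)  becomes
(\Delta u)(x) = \frac{1}{\bar\omega(x)}
  \int_{\Omega} \omega(x,y)\, \big( u(y) - u(x) \big)\, dy .
```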
It should be of interest to explore such models on hypergraphs. That would offer the possibility to account not only for correlations between pairs, but also between selected larger sets of vertices, for instance three collinear ones.
Basic Notions on Hypergraphs
Definition 1.1
([38]) An oriented hypergraph is a pair Γ = (V,H) such that V is a finite set of vertices and H is a set such that every element h in H is a pair of disjoint elements (h_{in},h_{out}) (input and output) in \(\mathcal {P}(V)\setminus \{\emptyset \}\). The elements of H are called the oriented hyperedges. Changing the orientation of a hyperedge h means exchanging its input and output, leading to the pair (h_{out},h_{in}).
With a slight abuse of notation, we shall identify h with h_{in} ∪ h_{out}.
Definition 1.2
([33]) Given h ∈ H, we say that two vertices i and j are co-oriented in h if they belong to the same orientation sets of h; we say that they are anti-oriented in h if they belong to different orientation sets of h.
Definition 1.3
Given i ∈ V, we say that two hyperedges h and \(h^{\prime }\) contain i with the same orientation if \(i\in (h_{in}\cap h^{\prime }_{in})\cup (h_{out}\cap h^{\prime }_{out})\); we say that they contain i with opposite orientation if \(i\in (h_{in}\cap h^{\prime }_{out})\cup (h_{out}\cap h^{\prime }_{in})\).
Definition 1.4
([31]) The degree of a vertex i is
and the cardinality of a hyperedge h is
From now on, we fix such an oriented hypergraph Γ = (V,H) on n vertices 1,…,n and m hyperedges h_{1},…,h_{m}. We assume that there are no vertices of degree zero. We denote by C(V ) the space of functions \(f:V\rightarrow \mathbb {R}\) and we denote by C(H) the space of functions \(\gamma :H\rightarrow \mathbb {R}\).
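A natural reading of Definition 1.4, consistent with how the degree-weighted norms are used later in the paper (this is our reconstruction, not a quotation of [31]):

```latex
% Degree of a vertex i and cardinality of a hyperedge h:
\deg(i) := \#\{\, h\in H \;:\; i\in h_{in}\cup h_{out} \,\} ,
\qquad
\# h := \#\,( h_{in}\cup h_{out} ) .
```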
p-Laplacians for p > 1
Definition 2.1
Given \(p\in \mathbb {R}_{> 1}\), the (normalized) vertex p-Laplacian is Δ_{p} : C(V ) → C(V ), where
where
We define its eigenvalue problem as
We say that a nonzero function f and real number λ satisfying (11) are an eigenfunction and the corresponding eigenvalue for Δ_{p}.
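To fix ideas, a vertex p-Laplacian of this type — namely the degree-normalized gradient of the energy \(E_{p}(f)={\sum }_{h\in H}|{\sum }_{i\in h_{in}}f(i)-{\sum }_{i\in h_{out}}f(i)|^{p}\) that appears in the proofs below — can be written as follows. The sign σ_h(i) is our shorthand, and the formula is a reconstruction consistent with the Rayleigh Quotient calculus of this section, not a verbatim quotation of the definition:

```latex
% With  sigma_h(i) := +1  if i is only an input of h,  -1  if i is only an
% output of h, and  0  if it is both:
(\Delta_p f)(i)
 = \frac{1}{\deg(i)} \sum_{h\in H:\, i\in h} \sigma_h(i)\,
   \Big| \sum_{j\in h_{in}} f(j) - \sum_{j\in h_{out}} f(j) \Big|^{p-2}
   \Big( \sum_{j\in h_{in}} f(j) - \sum_{j\in h_{out}} f(j) \Big) ,
% and the eigenvalue problem (11) reads, vertex by vertex,
(\Delta_p f)(i) = \lambda\, |f(i)|^{p-2} f(i) , \qquad i\in V .
```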
Remark 2.2
Definition 2.1 generalizes both the graph p-Laplacian and the normalized Laplacian defined in [21] for hypergraphs, which corresponds to the case p = 2.
Remark 2.3
The p-Laplace operators for classical hypergraphs that were introduced in [18] coincide with the vertex p-Laplacians that we introduce here in the case of simple graphs, but not in the more general case of hypergraphs. In fact, the Laplacians in [18] are related to the Lovász extension, while the operators that we consider here are defined via the incidence matrix. Also, the corresponding functionals for the p-Laplacians in [18] are of the form
and these are nonsmooth in general, even for p > 1. In our case, the corresponding functionals are of the form
and these are smooth for p > 1.
Definition 2.4
Given \(p\in \mathbb {R}_{> 1}\), the (normalized) hyperedge p-Laplacian is \({{\Delta }^{H}_{p}}:C(H)\to C(H)\), where
where
We define its eigenvalue problem as
We say that a nonzero function γ and a real number λ satisfying (12) are an eigenfunction and the corresponding eigenvalue for \({{\Delta }^{H}_{p}}\).
Remark 2.5
For p = 2, Definition 2.4 coincides with the one in [21]. Also, as we shall see, while it is known that the nonzero eigenvalues of Δ_{p} and \({{\Delta }_{p}^{H}}\) coincide for p = 2, this is no longer true for a general p.
Generalized Min-max Principle
For p = 2, the Courant–Fischer–Weyl min-max principle can be applied in order to obtain a characterization of the eigenvalues of Δ_{2} and \({{\Delta }_{2}^{H}}\) in terms of the Rayleigh Quotients of the functions f ∈ C(V ) and γ ∈ C(H), respectively, as shown in [21]. In this section we prove that, for p > 1, a generalized version of the min-max principle can be applied in order to learn more about the eigenvalues of Δ_{p} and \({{\Delta }_{p}^{H}}\). Similar results are already known for graphs, as shown for instance in [40]. Before stating the main results of this section, we define the generalized Rayleigh Quotients for functions on the vertex set and for functions on the hyperedge set.
Definition 2.6
Let \(p\in \mathbb {R}_{\geq 1}\). Given f ∈ C(V ), its generalized Rayleigh Quotient is
Analogously, the generalized Rayleigh Quotient of γ ∈ C(H) is
Remark 2.7
It is clear from the definition of RQ_{p}(f) and RQ_{p}(γ) that
and
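To make the generalized Rayleigh Quotient concrete, here is a minimal numeric sketch. The toy hypergraph, the function values, and the helper `rayleigh_quotient` are our own illustrative choices; we assume the energy \(E_{p}(f)={\sum }_{h}|{\sum }_{i\in h_{in}}f(i)-{\sum }_{i\in h_{out}}f(i)|^{p}\) and the degree-weighted p-norm that appear in the proofs below.

```python
def rayleigh_quotient(hyperedges, f, p):
    """RQ_p(f) = E_p(f) / sum_i deg(i) * |f(i)|**p, for f != 0.

    hyperedges: list of pairs (h_in, h_out) of vertex index tuples.
    """
    deg = [0] * len(f)
    for h_in, h_out in hyperedges:
        for i in set(h_in) | set(h_out):
            deg[i] += 1  # deg(i) = number of hyperedges containing i
    energy = sum(
        abs(sum(f[i] for i in h_in) - sum(f[i] for i in h_out)) ** p
        for h_in, h_out in hyperedges
    )
    norm = sum(deg[i] * abs(fi) ** p for i, fi in enumerate(f))
    return energy / norm

# Toy oriented hypergraph on V = {0, 1, 2, 3}:
# h1 has inputs {0, 1} and output {2}; h2 has input {2} and output {3}.
H = [((0, 1), (2,)), ((2,), (3,))]

# A function supported on a single vertex has RQ_p = 1 for every p
# (cf. Lemma 4.1 below):
print(rayleigh_quotient(H, [1.0, 0.0, 0.0, 0.0], 2))  # -> 1.0
```

Varying f over \(\mathbb{R}^{4}\setminus 0\) and minimizing or maximizing this quotient gives, by Corollary 2.9, the smallest and largest eigenvalues of Δ_{p} for this toy example.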
Theorem 2.8
Let \(p\in \mathbb {R}_{>1}\). f ∈ C(V ) ∖{0} is an eigenfunction for Δ_{p} with corresponding eigenvalue λ if and only if
Similarly, γ ∈ C(H) ∖{0} is an eigenfunction for \({{\Delta }^{H}_{p}}\) with corresponding eigenvalue μ if and only if
Proof
For \(p\in \mathbb {R}_{>1}\), RQ_{p} is differentiable on \(\mathbb {R}^{n}\setminus 0\). Also,
where we have used the fact that
Hence,
Furthermore, if f is an eigenfunction corresponding to an eigenvalue λ, then \({\Delta }_{p}f = \lambda |f|^{p-2}f\), therefore
which can be simplified as
This proves the claim for Δ_{p}. The case of \({{\Delta }^{H}_{p}}\) is similar. We have that
Therefore,
This proves the first implication for \({{\Delta }^{H}_{p}}\). The converse implication is analogous to the case of Δ_{p}. □
Corollary 2.9
For all p > 1,
is the smallest (resp. largest) eigenvalue of Δ_{p}, and f realizing (13) is a corresponding eigenfunction.
Analogously,
is the smallest (resp. largest) eigenvalue of \({{\Delta }^{H}_{p}}\), and γ realizing (14) is a corresponding eigenfunction.
Proof
By Fermat’s theorem, if f≠ 0 minimizes or maximizes RQ_{p} over \(\mathbb {R}^{n}\setminus 0\), then ∇RQ_{p}(f) = 0. The claim for Δ_{p} then follows by Theorem 2.8, and the case of \({{\Delta }^{H}_{p}}\) is analogous. □
We now give a preliminary definition, before stating the generalized min-max principle.
Definition 2.10
For a centrally symmetric set S in \(\mathbb {R}^{n}\), its Krasnoselskii \(\mathbb {Z}_{2}\)-genus is defined as
For each k ≥ 1, we let \(\text {Gen}_{k}:=\{ S\subset \mathbb {R}^{n}: S\text { centrally symmetric with } \text {gen}(S)\ge k\}\).
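Explicitly, the standard definition (going back to Krasnoselskii) is:

```latex
% Z_2-genus of a centrally symmetric set S with 0 not in S:
\operatorname{gen}(S) :=
\begin{cases}
0 , & S = \emptyset ,\\[2pt]
\min\{\, k\in\mathbb{N}_{+} : \text{there is an odd continuous map }
  S\to\mathbb{S}^{k-1} \,\} , & \text{if such a } k \text{ exists} ,\\[2pt]
+\infty , & \text{otherwise} .
\end{cases}
```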
Remark 2.11
From the above definition we get an inclusion chain
Therefore, the Krasnoselskii \(\mathbb {Z}_{2}\)-genus gives a graded index of the family of all centrally symmetric sets with center at 0 in \(\mathbb {R}^{n}\), which generalizes the (linear) dimension of subspaces.
Theorem 2.12
(Generalized min-max principle) Let \(p\in \mathbb {R}_{>1}\). For k = 1,…,n, the constants
are eigenvalues of Δ_{p}. They satisfy
and, if λ = λ_{k+ 1} = ⋯ = λ_{k+l} for 0 ≤ k < k + l ≤ n, then
The same holds for the constants
that are eigenvalues of \({{\Delta }^{H}_{p}}\).
Proof
By Theorem 2.8, in order to prove the claim for Δ_{p} it suffices to show that λ_{k}(Δ_{p}) defined in (15) is a critical value of RQ_{p}. Let
be the p-norm with weights given by the degrees, and let
Then, \(\text {RQ}_{p}(f)=E_{p}\left(\frac {f}{\|f\|_{p}}\right)\). Now, consider the l^{p}-sphere \(S_{p}=\{f\in \mathbb {R}^{n}:\|f\|_{p}=1\}\). We have that
where \(\mathbb {R}_{+}S := \{cg : g \in S, c > 0\}\). Therefore, it can be verified that
From the Liusternik–Schnirelmann Theorem applied to the smooth function E_{p} restricted to the smooth l^{p}-sphere S_{p}, it follows that such a min-max quantity must be a critical value of E_{p} on S_{p}, hence an eigenvalue. This proves the claim for Δ_{p}. The case of \({{\Delta }_{p}^{H}}\) is similar, if we consider
and \(S_{p}:=\{\gamma \in \mathbb {R}^{m}:\|\gamma \|_{p}=1\}\). □
Remark 2.13
For the case of p = 2, a linear subspace X in \(\mathbb {R}^{n}\) with \(\dim X=k\) satisfies gen(X) = k and by considering the subfamily
we have
This coincides with the Courant–Fischer–Weyl min-max principle. On the other hand, for p > 1, we only know that
In particular, while for p = 2 we know that the n eigenvalues of Δ_{p} (resp. the m eigenvalues of \({{\Delta }_{p}^{H}}\)) appearing in Theorem 2.12 are all the eigenvalues of Δ_{p} (resp. \({{\Delta }_{p}^{H}}\)), we don't know whether Δ_{p} and \({{\Delta }_{p}^{H}}\) also have more eigenvalues, for p≠ 2. This is still an open question also for the graph case. In other words, we don't know whether all eigenvalues of Δ_{p} and \({{\Delta }_{p}^{H}}\) can be written in the min-max Rayleigh Quotient form.
Conjecture 1
For \(1<p<\infty \), all eigenvalues of Δ_{p} are min-max eigenvalues.
We formulate this conjecture because for the p-Laplacian on domains and manifolds, as well as on graphs, it is an open problem whether all the eigenvalues of the p-Laplacian are of min-max form (see [3, 6, 13] and [40]). Thus, as far as we know, Conjecture 1 is open in both the continuous and the discrete setting.
Throughout the paper, given p > 1 we shall denote by
the eigenvalues of Δ_{p} and \({{\Delta }^{H}_{p}}\), respectively, which are described in Theorem 2.12. We shall call them the min-max eigenvalues. Note that, although we cannot say a priori whether these are all the eigenvalues of the p-Laplacians, in view of Corollary 2.9 we can always say that
1-Laplacians
In this section we generalize the well-known 1-Laplacian for graphs [7, 8, 17] to the case of hypergraphs.
Definition 3.1
The 1-Laplacian is the set-valued operator such that, given f ∈ C(V ),
where e_{1},…,e_{n} is the standard orthonormal basis of \(\mathbb {R}^{n}\) and
Analogously, the hyperedge 1Laplacian for functions γ ∈ C(H) is
where \(\mathbf {e}_{h_{1}},\ldots ,\mathbf {e}_{h_{m}}\) is the standard orthonormal basis of \(\mathbb {R}^{m}\).
For any f ∈ C(V ), Δ_{1}f is a compact convex set in \(C(V)\cong \mathbb {R}^{n}\), and similarly \({{\Delta }_{1}^{H}}\gamma \) is a compact convex set in \(C(H)\cong \mathbb {R}^{m}\) for any γ ∈ C(H).
Remark 3.2
The 1-Laplacian is the limit of the p-Laplacian with respect to the set-valued upper limit, i.e.
where \(\mathbb {B}_{\delta }(f)\) is the ball with radius δ and center f. In other words, Δ_{1}f is the set of limit points of \({\Delta }_{p}f^{\prime }\) when p → 1 and \(f^{\prime }\to f\). On the one hand, if f is such that \({\sum }_{i\in h_{in}}f(i)\ne {\sum }_{i\in h_{out}}f(i)\) for all h ∈ H, then \({\Delta }_{1}f=\lim _{p\to 1^{+}}{\Delta }_{p}f\) in the classical sense. On the other hand, for a general f ∈ C(V ), the limit may not exist. To some extent, the set-valued upper limit ensures the upper semicontinuity of the family of p-Laplacians, that is, the set-valued mapping \([1,\infty )\times C(V)\ni (p,f)\mapsto {\Delta }_{p}f\in C(V)\) is upper semicontinuous.
Definition 3.3
The eigenvalue problem of Δ_{1} is to find the eigenpair (λ,f) such that
or equivalently, in terms of Minkowski summation,
In coordinate form it means that there exist
with z_{ih} = o_{h}(i,j)z_{jh} for i,j ∈ h, and z_{i} ∈Sgn(f(i)) such that
Remark 3.4
A shorter coordinate form of the eigenvalue problem for the 1-Laplacian is
Observe also that \(({\sum \limits }_{i\in h_{in}}f(i)-{\sum \limits }_{i\in h_{out}}f(i))z_{h}=\left|{\sum \limits }_{i\in h_{in}}f(i)-{\sum \limits }_{i\in h_{out}}f(i)\right|\) and \(f(i)z_{i} = |f(i)|\), for all h ∈ H and for all i ∈ V.
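Spelled out (with our shorthand σ_h(i) := +1 if i is only an input of h, −1 if i is only an output, and 0 if it is both), the coordinate form asks for f ∈ C(V ) ∖ 0, λ ∈ \(\mathbb{R}\) and selections z_h, z_i such that the following vertex-wise inclusion holds; this is a reconstruction consistent with Remark 3.4 and with the p = 1 computations in Section 5, not a verbatim quotation:

```latex
% For every vertex i in V:
\sum_{h\in H:\, i\in h} \sigma_h(i)\, z_h
  \;\in\; \lambda \deg(i)\, \operatorname{Sgn}\big(f(i)\big) ,
\qquad
z_h \in \operatorname{Sgn}\Big( \sum_{j\in h_{in}} f(j)
  - \sum_{j\in h_{out}} f(j) \Big) .
```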
The eigenvalue problem of \({{\Delta }_{1}^{H}}\) can be defined in an analogous way. In particular, all results shown in this section for Δ_{1} also hold for \({{\Delta }_{1}^{H}}\). Without loss of generality, we only prove them for Δ_{1}.
Definition 3.5
For the generalized Rayleigh Quotient RQ_{1} (cf. Definition 2.6), its Clarke derivative at f ∈ C(V ) is
This is a compact convex set in C(V ).
Remark 3.6
Clarke introduced such a derivative for locally Lipschitz functions, in the field of nonsmooth optimization [9, 10]. Clearly, RQ_{1} is not smooth, but it is piecewise smooth (therefore locally Lipschitz) on \(\mathbb {R}^{n}\setminus 0\). Hence, the Clarke derivative for RQ_{1} is well defined. Also, since the Clarke derivative coincides with the usual derivative for smooth functions, we choose to denote it by ∇ also for locally Lipschitz functions.
Definition 3.7
Given f ∈ C(V ), let
Proposition 3.8
For all i ∈ V,
Proof
Note that the Clarke derivative of the function \(\mathbb {R}\ni t\mapsto |t|\) is Sgn(t). Hence, by the chain rule in nonsmooth analysis, for \(a_{1},\ldots ,a_{k}\in \mathbb {R}\),
Finally, applying the additivity of Clarke’s derivative, we derive the desired identities. □
Theorem 3.9
(Min-max principle for the 1-Laplacian) If f is a critical point of the function RQ_{1}, i.e. 0 ∈∇RQ_{1}(f), then f is an eigenfunction and RQ_{1}(f) is the corresponding eigenvalue of Δ_{1}. A function f ∈ C(V ) ∖ 0 is a maximum (resp. minimum) eigenfunction of Δ_{1} if and only if it is a maximizer (resp. minimizer) of RQ_{1}; λ is the largest (resp. smallest) eigenvalue of Δ_{1} if and only if it is the maximum (resp. minimum) value of RQ_{1}.
Also, the constants
are eigenvalues of Δ_{1}. Furthermore, \(\lim _{p\to 1^{+}} \lambda _{k}({\Delta }_{p})= \lambda _{k}({\Delta }_{1})\), and any limit point of {f_{k,p}}_{p> 1} is an eigenfunction of Δ_{1} w.r.t. λ_{k}(Δ_{1}), where f_{k,p} is an eigenfunction of λ_{k}(Δ_{p}), ∀k = 1,…,n. Besides, if \(\lim _{p\to 1^{+}} \lambda _{k}({\Delta }_{p})=\lim _{p\to 1^{+}} \lambda _{k+l}({\Delta }_{p})\) for some \(k,l\in \mathbb {N}_{+}\), then λ_{k}(Δ_{1}) has multiplicity at least l + 1.
Proof
The proof is based on the theory of Clarke derivative, established in [10].
Let f be a critical point of the function RQ_{1}. By the chain rule for the Clarke derivative,
Therefore, f is an eigenfunction of Δ_{1}, and RQ_{1}(f) is the corresponding eigenvalue. Also, again by the basic results on Clarke derivative, if f is a maximizer (minimizer) of RQ_{1}, then 0 ∈∇RQ_{1}(f). Hence, 0 ∈Δ_{1}f −RQ_{1}(f)Sgn(f). Thus, f is an eigenfunction, and RQ_{1}(f) is a corresponding eigenvalue.
Now, if f is an eigenfunction corresponding to an eigenvalue λ, i.e. 0 ∈Δ_{1}f − λ Sgn(f) or equivalently
then by the Euler identity for one-homogeneous Lipschitz functions,
Therefore, by (17), we get that 0 = E_{1}(f) − λ∥f∥_{1}, which implies λ = RQ_{1}(f). Hence, the maximum (resp. the minimum) of RQ_{1} is the largest (resp. smallest) eigenvalue of Δ_{1}.
The min-max principle (16) is a consequence of the nonsmooth version of the Liusternik–Schnirelmann Theorem [11], and thus we omit the details of the proof.
The convergence property \(\lim _{p\to 1^{+}} \lambda _{k}({\Delta }_{p})= \lambda _{k}({\Delta }_{1})\) is a consequence of the result on Gamma-convergence of min-max values [12].
Now, without loss of generality, we may assume that f_{k,p} → f_{∗} as p → 1^{+}. Then, according to Remark 3.2, \(\lim _{p\to 1^{+}}{\Delta }_{p}f_{k,p}\in {\Delta }_{1} f_{\ast }\). Similarly, \(|f_{k,p}(i)|^{p-2}f_{k,p}(i) \to \text {sign}(f_{\ast }(i))\) as p tends to 1^{+}. By taking p → 1^{+} in the equality
we get
which means that f_{∗} is an eigenfunction of Δ_{1}.
The condition \(\lim _{p\to 1^{+}} \lambda _{k}({\Delta }_{p})=\lim _{p\to 1^{+}} \lambda _{k+l}({\Delta }_{p})\) implies λ_{k}(Δ_{1}) = λ_{k+ 1}(Δ_{1}) = ⋯ = λ_{k+l}(Δ_{1}), which implies that λ_{k}(Δ_{1}) has multiplicity at least l + 1, according to the Liusternik–Schnirelmann Theory. This completes the proof. □
Analogously to the case of p > 1, also for p = 1 we shall denote by
the eigenvalues of Δ_{1} that are described in Theorem 3.9 and the analogous eigenvalues of \({{\Delta }_{1}^{H}}\) that can be obtained in the same way. Also in this case, as well as for p > 1, we can always say that
Remark 3.10
In contrast to the case of the p-Laplacian for p > 1, the converse of Theorem 3.9 is not true, that is, there exist eigenfunctions f of Δ_{1} that are not critical points of RQ_{1}. However, showing this requires a long argument that we carry out in [23]. In [23] we also show, furthermore, that Conjecture 1 cannot hold for Δ_{1}. (We had already noted in Section 1.1 that this is also a subtle issue in the continuous case.)
Smallest and Largest Eigenvalues
In [31], it has been proved that
Hence, we can characterize the maximal eigenvalue of \({{\Delta }^{H}_{1}}\) by means of a combinatorial quantity. In this section, we investigate further properties of both the largest and the smallest eigenvalues of the p-Laplacians, for general p.
Lemma 4.1
For all p, λ_{1} ≤ 1 ≤ λ_{n}.
Proof
Let \(\tilde {f}:V\rightarrow \mathbb {R}\) be the function that is 1 on a fixed vertex and 0 on all other vertices. Then, for all p, \(\text {RQ}_{p}(\tilde {f})=1\). Therefore,
□
Lemma 4.2
For p = 1 and for all hypergraphs, λ_{n} = 1.
Proof
We generalize the proof of [22, Lemma 8]. Let \(\hat {f}:V\rightarrow \mathbb {R}\) be a maximizer of
and assume, without loss of generality, that \({\sum \limits }_{i\in V}\deg (i)|\hat {f}(i)|=1\). Then,
The reverse inequality follows from Lemma 4.1. □
Remark 4.3
If we compare (18) and Lemma 4.2 we can see that, while for p = 2, i.e. in the case of the usual hypergraph Laplacian, μ_{m} = λ_{n} and μ_{1} = λ_{1}, this is not necessarily true for all p.
Lemma 4.4
For all p,
Proof
Let \(\tilde {\gamma }:H\rightarrow \mathbb {R}\) be the function that is 1 on a fixed hyperedge h and 0 on all other hyperedges. Then, for all p,
Therefore,
Since this is true for all h, this proves the claim. □
Nodal Domain Theorems
In [33], the authors prove two nodal domain theorems for Δ_{2}. In this section, we establish similar results for Δ_{p}, for all p ≥ 1. Before doing so, we recall the definitions of nodal domains for oriented hypergraphs. We refer the reader to [4] for nodal domain theorems on graphs.
Definition 5.1
([33]) Given a function \(f:V\to \mathbb {R}\), we let supp(f) := {i ∈ V : f(i)≠ 0} be the support set of f. A nodal domain of f is a connected component of
Similarly, we let supp_{±}(f) := {i ∈ V : ±f(i) > 0}. A positive nodal domain of f is a connected component of
A negative nodal domain of f is a connected component of H ∩supp_{−}(f).
Signless Nodal Domain
Definition 5.2
We say an eigenvalue λ of Δ_{p} has multiplicity r if gen{eigenfunctions w.r.t. λ} = r.
Theorem 5.3
If f is an eigenfunction of the kth minmax eigenvalue λ_{k}(Δ_{p}) and this has multiplicity r, then the number of nodal domains of f is smaller than or equal to k + r − 1.
Proof
Suppose the contrary, that is, f is an eigenfunction of λ_{k} with multiplicity r, and f has at least k + r nodal domains which are denoted by V_{1},…,V_{k+r}. For simplicity, we assume that
Consider a linear function space X spanned by \(f_{V_{1}},\ldots ,f_{V_{k+r}}\), where the restriction \(f_{V_{i}}\) is defined by
Since V_{1},…,V_{k+r} are pairwise disjoint, \(\dim X=k+r\). Given g ∈ X ∖ 0, there exists (t_{1},…,t_{k+r})≠0 such that \(g={\sum \limits }_{i=1}^{k+r} t_{i} f_{V_{i}}\).
It is clear that \(\|g\|_{p}^{p}={\sum }_{i=1}^{k+r} |t_{i}|^{p}\|f_{V_{i}}\|_{p}^{p}\). By the definition of nodal domain, each hyperedge h intersects at most one V_{i} ∈{V_{1},…,V_{k+r}}, which implies that \(E_{p}(g)={\sum }_{i=1}^{k+r} |t_{i}|^{p} E_{p}(f_{V_{i}})\). Finally, we note that for p > 1,
which implies that \(E_{p}(f_{V_{l}})=\lambda _{k}\|f_{V_{l}}\|_{p}^{p}\). For the case of p = 1, we have
in which the parameters \(z_{h}\in \text {Sgn}({\sum \limits }_{i\in h_{in}}f(i)-{\sum \limits }_{j\in h_{out}}f(j))\) and z_{i} ∈Sgn(f(i)) (cf. Remark 3.4).
Therefore,
By the min-max principle for Δ_{p},
which leads to a contradiction. □
Positive and Negative Nodal Domain Theorem
In this section, we show a new Courant nodal domain theorem for oriented hypergraphs with only inputs. Note that Theorem 5.3 does not hold if we replace “nodal domains” by “positive and negative nodal domains”. In fact, for the connected hypergraph Γ_{k} := (V,E_{k}) with V := {1,…,n} and
in which we suppose that there are only inputs, the number of positive and negative nodal domains of the first eigenfunction w.r.t. λ_{1} = 0 is n.
Theorem 5.4
Let Γ = (V,H) be an oriented hypergraph with only inputs. If f is an eigenfunction of the kth min-max eigenvalue λ_{k} and this has multiplicity r, then the number of positive and negative nodal domains of f is smaller than or equal to n − k + r.
Proof
Suppose the contrary, that is, f is an eigenfunction of λ_{k} with multiplicity r, and f has at least n − k + r + 1 positive and negative nodal domains, which are denoted by V_{1},…,V_{n−k+r+ 1}. Consider a linear function space X spanned by \(f_{V_{1}},\ldots ,f_{V_{n-k+r+1}}\), where the restriction \(f_{V_{i}}\) is defined by
Since V_{1},…,V_{n−k+r+ 1} are pairwise disjoint, \(\dim X=n-k+r+1\). For g ∈ X ∖ 0, there exists (t_{1},…,t_{n−k+r+ 1})≠0 such that \(g={\sum \limits }_{i=1}^{n-k+r+1} t_{i} f_{V_{i}}\). By the definition of positive and negative nodal domains, each hyperedge h intersects at most one positive nodal domain and at most one negative nodal domain. Thus, for \(l\ne l^{\prime }\) and h ∈ H, \(\left ({\sum \limits }_{i\in h_{in}}f_{V_{l}}(i)\right )\cdot \left ({\sum \limits }_{i\in h_{in}}f_{V_{l^{\prime }}}(i)\right )\le 0\).
Now, with a little abuse of notation we let h = h_{in}. For p > 1, we have that
where the inequality is deduced by taking \(A={\sum \limits }_{i\in h}f_{V_{l}}(i)\) and \(B={\sum \limits }_{i\in h}f_{V_{l^{\prime }}}(i)\) in the following lemma. Similarly, for p = 1 we have
where \(z_{h}\in \text {Sgn}({\sum \limits }_{i\in h}f(i))\) and z_{i} ∈Sgn(f(i)).
Lemma 5.5
Let p ≥ 1, and let \(t,s,A,B\in \mathbb {R}\) with AB ≤ 0. Then,
In the particular case of p = 1, we further have \(|tA + sB|\ge (tA + sB)z\), ∀z ∈Sgn(A + B).
By Lemma 5.5, it follows that RQ(g) ≥ λ_{k}.
By the intersection property of the \(\mathbb {Z}_{2}\)-genus, \(X^{\prime }\cap X\setminus \{0\}\ne \emptyset \) for any \(X^{\prime }\in \text {Gen}_{k-r}\). Therefore,
Together with λ_{k−r} ≤… ≤ λ_{k− 1} ≤ λ_{k}, this implies that λ_{k−r} = ⋯ = λ_{k− 1} = λ_{k}, meaning that the multiplicity of λ_{k} is at least r + 1, which leads to a contradiction. □
It is only left to prove Lemma 5.5.
Proof
of Lemma 5.5 Without loss of generality, we may assume that A > 0 > B and \(A>B^{\prime }:=B\). In order to prove (19), it suffices to show that
that is,
By the convexity of the function t↦|t|^{p}, we have
which proves (19). Now, in order to prove the stronger inequality for p = 1, since z = (A + B)|A + B|^{− 1} if A + B ≠ 0, it suffices to focus on the case A + B = 0. In this case, by \(|t-s|\ge \max \limits \{t-s,s-t\}\), we have |t − s| ≥ (t−s)z for any z ∈ [− 1,1]. Therefore, |tA + sB| = A|t − s| ≥ A(t−s)z = (tA + sB)z. The proof is completed. □
Smallest Nonzero Eigenvalue
In this section, we discuss the smallest nonzero eigenvalue \(\lambda _{\min \limits }\) of Δ_{p}, for p ≥ 1, as a continuation of Sections 5 and 6 in [33], which focus on the easier study of \(\lambda _{\min \limits }\) for the 2-Laplacian. As in [33], we let \(\mathcal {I}^{h}:V\to \mathbb {R}\) and \(\mathcal {I}_{i}:H\to \mathbb {R}\) be defined by
Theorem 6.1
For p ≥ 1,
where \(d:=\dim \text {span}(\mathcal {I}^{h}:h\in H)^{\bot }\) and \(d^{\prime }:=\dim \text {span}(\mathcal {I}_{i}:i\in V)^{\bot }\).
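For intuition, the dimensions d and d′ can be computed numerically for a small example. The sketch below is an illustration under two assumptions not restated in the theorem: that \(\mathcal{I}^h\) equals + 1 on the inputs of h, − 1 on the outputs, and 0 elsewhere (following the setup of [33]), and that \(\mathcal{I}_i(h)=\mathcal{I}^h(i)\), so the \(\mathcal{I}^h\) are the rows and the \(\mathcal{I}_i\) the columns of one incidence matrix.

```python
import numpy as np

# Assumption (following [33]): I^h(i) = +1 if i is an input of h,
# -1 if an output, 0 otherwise. Rows of M are the functions I^h,
# columns are the functions I_i.
# Toy oriented hypergraph: 4 vertices, 2 hyperedges.
M = np.array([
    [1, 1, -1, 0],   # h1: inputs {0,1}, output {2}
    [0, 0, 1, -1],   # h2: input {2}, output {3}
], dtype=float)

m, n = M.shape
rank = np.linalg.matrix_rank(M)
d = n - rank        # d  = dim span(I^h : h in H)^perp  (in R^n)
d_prime = m - rank  # d' = dim span(I_i : i in V)^perp  (in R^m)
print(d, d_prime)   # here: 2 0
```

Since row rank equals column rank, d − d′ = n − m for every oriented hypergraph under these conventions.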
Remark 6.2
(20) above generalizes Equation (5) in [33]. In fact, for p = 2, by letting
we have that \(\bar f:=f-\bar g\) is orthogonal to \(\text {span}(\mathcal {I}^{h}:h\in H)^{\bot }\) with respect to the weighted scalar product \((f^{\prime },g^{\prime }):={\sum \limits }_{i\in V}\deg (i) f^{\prime }(i) g^{\prime }(i)\). Therefore,
and this coincides with Equation (5) in [33, Lemma 6.1].
Proof
of Theorem 6.1 Let \(X:=\text {span}(\mathcal {I}^{h}:h\in H)^{\bot }\). We shall prove that
If d = 0, the claim is straightforward because in this case X = {0}, \(X^{\bot }=\mathbb {R}^{n}\) and
Now, assume d ≥ 1. Since X ∈ Gen_{d} and RQ_{p}(f) = 0 for all f ∈ X, we have λ_{1} = ⋯ = λ_{d} = 0. From the local compactness of X^{⊥}, the zero-homogeneity of RQ_{p} and the fact that E_{p}(f) > 0 for all f ∈ X^{⊥}∖{0}, it follows that \(\tilde {\lambda }>0\). For the case p > 1, we still need to prove the following three steps.

(I)
\(\lambda _{d+1}\ge \tilde {\lambda }\):
Observe that \(\dim X^{\bot }=n-d\). Since the l^{p}-norm is smooth and strictly convex for p > 1, for each f there is a unique g_{f} ∈ X such that
$$ \sum\limits_{i\in V}\deg(i)|f(i)-g_{f}(i)|^{p}=\min_{g\in X}\sum\limits_{i\in V}\deg(i)|f(i)-g(i)|^{p} $$and the map φ : f↦f − g_{f} is smooth. Moreover, \(\varphi |_{X^{\bot }}:X^{\bot }\to \varphi (X^{\bot })\) is bicontinuous (i.e., a homeomorphism). Clearly, φ maps − f↦ − f − g_{−f} = −f + g_{f}, therefore φ is odd. Hence, if we let f^{⊥} be the projection of f to X^{⊥}, we get an odd homeomorphism \(\psi :\mathbb {R}^{n}\to \mathbb {R}^{n}\), \(f\mapsto f-g_{f^{\bot }}\).
Thus, by the homotopy property of the \(\mathbb {Z}_{2}\)-genus, for any S ∈ Gen_{d+1} we have that the preimage \(\psi ^{-1}(S)\in \text {Gen}_{d+1}\). Moreover, by the intersection property of the \(\mathbb {Z}_{2}\)-genus, ψ^{−1}(S) ∩ X^{⊥}≠∅, which implies S ∩ ψ(X^{⊥}) = ψ(ψ^{−1}(S) ∩ X^{⊥})≠∅. Also note that ψ(X^{⊥}) = φ(X^{⊥}). Hence, for any S ∈ Gen_{d+1},
$$ \sup_{f\in S}\text{RQ}_{p}(f)\ge \inf_{f\in \varphi(X^{\bot})}\text{RQ}_{p}(f)=\tilde{\lambda}. $$This proves that \(\lambda _{d+1}\ge \tilde {\lambda }\).

(II)
\(\lambda _{d+1}\le \tilde {\lambda }\):
For any f ∈ X^{⊥}∖ X, let \(X^{\prime }:=\text {span}(X\cup \{f\})\). Then, \(X^{\prime }\in \text {Gen}_{d+1}\) and
$$ \begin{array}{@{}rcl@{}} \lambda_{d+1}\le \sup_{f^{\prime}\in X^{\prime}}\text{RQ}_{p}(f^{\prime})&=&\sup_{g\in X}\frac{E_{p}(f)}{{\sum\limits}_{i\in V}\deg(i)|f(i)+g(i)|^{p}}\\ &=&\frac{{\sum\limits}_{h\in H}|\langle\mathcal{I}^{h},f\rangle|^{p}}{\min_{g\in X}{\sum\limits}_{i\in V}\deg(i)|f(i)-g(i)|^{p}}. \end{array} $$Since this holds for all f ∈ X^{⊥}∖ X, we derive that \(\lambda _{d+1}\le \tilde {\lambda }\).

(III)
There is no positive eigenvalue between λ_{1} = 0 and λ_{d+ 1} > 0:
Suppose the contrary and assume that f is an eigenfunction with eigenvalue \(\text {RQ}_{p}(f)\in (0,\tilde {\lambda })\). Then ∇RQ_{p}(f) = 0. Consider the function t↦RQ_{p}(f − tg_{f}). On the one hand,
$$ \frac{d}{dt}\Big|_{t=0}\text{RQ}_{p}(f-tg_{f})=\langle\nabla \text{RQ}_{p}(f),-g_{f}\rangle=0. $$On the other hand, E_{p}(f − tg_{f}) = E_{p}(f) and the function
$$ t\mapsto \sum\limits_{i\in V}\deg(i)|f(i)-tg_{f}(i)|^{p} $$(21) is a strictly convex function with minimum at t = 1. This implies that (21) is strictly decreasing and convex on (− 1,1), thus
$$ \frac{d}{dt}\Big|_{t=0} \sum\limits_{i\in V}\deg(i)|f(i)-tg_{f}(i)|^{p}<0. $$Hence, we get \(\frac {d}{dt}\Big |_{t=0}\text {RQ}_{p}(f-tg_{f})>0\), which leads to a contradiction.
This proves the case p > 1. Finally, we complete the proof of the case p = 1. Since
we only need to prove that (III) holds also for Δ_{1}. Suppose the contrary and let \(\hat {f}\) be an eigenfunction corresponding to an eigenvalue \(\lambda \in (0,\tilde {\lambda })\). Then, \(0\in \nabla E_{1}(\hat {f})-\lambda \nabla \|\hat {f}\|_{1}\). Now, consider a flow near \(\hat {f}\) defined by η(f,t) := f − tg_{f}, where t ≥ 0 and \(f\in \mathbb {B}_{\delta }(\hat {f})\) for sufficiently small δ > 0. Note that
is an increasing function of t, since ∥f − tg_{f}∥_{1} < ∥f∥_{1} and ∥⋅∥_{1} is convex. Consequently, by the theory of the weak slope [11], we have that \(0\not \in \nabla \left (E_{1}(\hat {f})-\lambda \|\hat {f}\|_{1}\right )=\nabla E_{1}(\hat {f})-\lambda \nabla \|\hat {f}\|_{1}\), which is a contradiction. This completes the proof. □
We shall now discuss some consequences of Theorem 6.1.
Corollary 6.3
For p ≥ 1,
Proof
It follows immediately from Theorem 6.1. □
Corollary 6.4
For p ≥ 1, let \(\lambda _{p,\min \limits }\) be the smallest positive eigenvalue of the p-Laplacian. Then,
Proof
For p ≤ 2, it is known that \({\sum }_{h\in H}|\langle \mathcal {I}^{h},f\rangle |^{p}\ge \left ({\sum }_{h\in H}|\langle \mathcal {I}^{h},f\rangle |^{2}\right )^{p/2}\) and
Thus, applying Corollary 6.3, we have
The case of p ≥ 2 is similar. □
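The norm comparison used in this proof is the standard fact that \(\|x\|_{p}\ge \|x\|_{2}\) for p ≤ 2 (with the inequality reversed for p ≥ 2), applied to the vector of values \(\langle \mathcal {I}^{h},f\rangle \). A quick numerical sanity check of both directions:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=20)  # stand-ins for the values <I^h, f>

for p in [1.0, 1.5, 2.0, 3.0]:
    lhs = np.sum(np.abs(a) ** p)          # sum_h |<I^h, f>|^p
    rhs = np.sum(a ** 2) ** (p / 2)       # (sum_h <I^h, f>^2)^{p/2}
    if p <= 2:
        assert lhs >= rhs - 1e-12         # ||x||_p >= ||x||_2 for p <= 2
    else:
        assert lhs <= rhs + 1e-12         # reversed for p >= 2
print("norm comparison verified")
```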
Remark 6.5
We further have
where \(\hat {\lambda }_{p,\min \limits }=\lambda _{p,\min \limits }^{\frac 1p}\). This implies that
thus \(\hat {\lambda }_{p,\min \limits }\) is a continuous function of \(p\in [1,\infty )\) and the limit \(\lim _{p\to +\infty }\hat {\lambda }_{p,\min \limits }\in [0,n]\) exists.
Remark 6.6
For p ≥ 1, let
By Corollary 6.3 and Remark 6.5, we get that
which can be seen as a dual inequality with respect to the one in Corollary 6.3. Note that the constant C_{p} is such that C_{2} = 1 for all oriented hypergraphs and C_{1} = 2 in the graph case.
Vertex Partition Problems
In [33], two vertex partition problems for oriented hypergraphs have been discussed: the k-coloring, that is, a function \(f:V\rightarrow \{1,\ldots ,k\}\) such that f(i)≠f(j) for all i≠j ∈ h and for all h ∈ H, and the generalized Cheeger problem. In this section we discuss more partition problems, and we also define a new coloring number that takes signs into account as well.
In [33], the generalized Cheeger constant is defined as
where, given \(\emptyset \neq S\subseteq V\),
\(\overline {S}:=V\setminus S\) and
We generalize e(S) by letting, for p ≥ 1 and \(\emptyset \neq S\subseteq V\),
Remark 7.1
For a graph, e_{p}(S) = e(S) = ∂(S) is the number of edges between S and \(\overline {S}\), for all p. It measures, therefore, the flow between S and \(\overline {S}\). More generally, we can say that computing e_{p}(S) (as well as Vol(S)) means deleting all vertices in \(\overline {S}\), in the sense of [33, Definition 2.20], and then computing e_{p} (respectively, the volume) on the vertex set of the subhypergraph obtained. Furthermore, when e_{p} is computed on the vertex set,
where the first inequality is an equality if and only if #h_{in} = #h_{out} for each hyperedge, and the second one is an equality if and only if there are either only inputs or only outputs. Hence, we could see the case e_{p}(V ) = 0 as a balance condition. Having #h_{in} = #h_{out} means that what comes in is the same as what goes out. Hence, also in the general case we can say that e_{p} measures a flow.
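As a concrete illustration, the sketch below takes as a working assumption that \(e_{p}(S)={\sum }_{h\in H}|\#(h_{in}\cap S)-\#(h_{out}\cap S)|^{p}\) (consistent with the graph case described above, where each edge has one input and one output, so only crossing edges contribute); the definition is assumed here, not quoted from the displayed formula.

```python
# Hedged sketch: assumes e_p(S) = sum_h |#(h_in ∩ S) - #(h_out ∩ S)|^p,
# which for a graph (one input, one output per edge) counts the edges
# between S and its complement, independently of p.
def e_p(hyperedges, S, p):
    S = set(S)
    total = 0.0
    for h_in, h_out in hyperedges:
        diff = len(set(h_in) & S) - len(set(h_out) & S)
        total += abs(diff) ** p
    return total

# A 4-cycle graph 0-1-2-3-0, each edge written as (inputs, outputs):
cycle = [({0}, {1}), ({1}, {2}), ({2}, {3}), ({3}, {0})]
print(e_p(cycle, {0, 1}, p=2))   # two crossing edges -> 2.0
```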
k-cut Problems
We now generalize the balanced minimum k-cut problem and the max k-cut problem, known for graphs [15, 35], to the case of hypergraphs.
Definition 7.2
Given k ∈{2,…,n}, the balanced minimum k-cut is
The maximum k-cut is
Lemma 7.3
For each \(\emptyset \neq S\subseteq V\) and for each p ≥ 1,
Therefore, in particular, for each k ∈{2,…,n}
and
Proof
Let f ∈ C(V ) be 1 on S and 0 on \(\bar {S}\). Then,
The second claim follows by applying the first one to all the V_{i}’s. □
Signed Coloring Number
We now introduce the new notion of signed coloring number, which also takes into account the input/output structure of the hypergraph. We denote by χ(Γ) the coloring number defined in [33].
Definition 7.4
A signed k-coloring of the vertices is a function f : V →{1,…,k} such that, for all h ∈ H, f(i)≠f(j) if i and j are anti-oriented in h. The signed coloring number of Γ, denoted χ_{sign}(Γ), is the minimal k such that there exists a signed k-coloring.
Remark 7.5
Note that χ_{sign}(Γ) ≤ χ(Γ). Also, χ_{sign} ≤ 2 if and only if Γ is bipartite.
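For small hypergraphs, χ_sign can be computed by brute force. The sketch below is an assumption-laden illustration: it takes "i and j are anti-oriented in h" to mean that one of the two vertices is an input and the other an output of h, and the helper `chi_sign` is hypothetical, not from the paper.

```python
from itertools import product

# Brute-force signed coloring number: a valid coloring must separate
# every anti-oriented pair (i an input, j an output of the same hyperedge).
def chi_sign(n, hyperedges):
    pairs = {(i, j) for h_in, h_out in hyperedges
             for i in h_in for j in h_out if i != j}
    for k in range(1, n + 1):
        for f in product(range(k), repeat=n):
            if all(f[i] != f[j] for i, j in pairs):
                return k
    return n

# A 4-cycle with alternating orientations: vertices 0, 2 are always
# inputs and 1, 3 always outputs, so the hypergraph is bipartite.
cycle = [({0}, {1}), ({2}, {1}), ({2}, {3}), ({0}, {3})]
print(chi_sign(4, cycle))   # -> 2, matching Remark 7.5 (bipartite case)
```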
Applying Lemma 7.3 to the signed coloring number, we get the following corollary.
Corollary 7.6
Let χ_{sign} := χ_{sign}(Γ) and let \(V_{1},\ldots ,V_{\chi _{\text {sign}}}\) be the corresponding coloring classes. For each p ≥ 1,
Also, the upper bound in (22) shrinks to an equality for p = 1.
Proof
The first fact follows from Lemma 7.3 since, by definition of signed coloring number,
for each coloring class V_{i}.
In the particular case of p = 1, \({\sum \limits }_{h\in H}\#(V_{i}\cap h)=\text {Vol}(V_{i})\) for each i, therefore
Since we know, from Lemma 4.2, that \(\max \limits _{f}\text {RQ}_{1}(f)=1\), this proves that the upper bound in (22) shrinks to an equality for p = 1. □
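The identity \({\sum }_{h\in H}\#(V_{i}\cap h)=\text {Vol}(V_{i})\) used above is a double count: each vertex j ∈ V_{i} is counted once for every hyperedge containing it, i.e., deg(j) times. A minimal check (the helper `vol` is an illustrative stand-in for Vol):

```python
# Double counting: sum_h #(S ∩ h) = sum_{j in S} deg(j) = Vol(S).
def vol(vertices, hyperedges):
    # deg(j) = number of hyperedges containing j
    return sum(sum(j in h for h in hyperedges) for j in vertices)

hyperedges = [{0, 1, 2}, {1, 2}, {2, 3}]
S = {1, 2}
lhs = sum(len(S & h) for h in hyperedges)   # sum_h #(S ∩ h)
print(lhs, vol(S, hyperedges))              # both sides: 5 5
```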
Remark 7.7
The fact that the upper bound in (22) shrinks to an equality for p = 1 is particularly interesting because this is similar to what happens for the Cheeger constant h in the case of graphs, and for the Cheeger-like constant Q defined in [22] for graphs and generalized in [31] for hypergraphs. In fact, we have that:

1.
For connected graphs, the Cheeger constant h can be used for bounding λ_{2} in the case of Δ_{2} and, as shown in [8, 17], it is equal to λ_{2} in the case of Δ_{1}.

2.
For general hypergraphs, the Cheeger-like constant Q can be used for bounding λ_{n} in the case of Δ_{2} and \({{\Delta }^{H}_{2}}\), and it is equal to λ_{n} in the case of \({{\Delta }^{H}_{1}}\) (cf. [31]).

3.
In (22) we again have something similar, because the quantity that bounds λ_{n} from below for Δ_{p} equals λ_{n} for Δ_{1}.
Of course, the main difference between the last case and the first two is that h and Q are constants that are independent of p, while the quantity in (22) changes when p changes.
Remark 7.8
In the case of graphs, by definition of signed coloring number we have that #(V_{i} ∩ h) ∈{0,1} for each coloring class V_{i} and for each edge h. In particular,
and the constant appearing in (22) is equal to 1 for all p.
Multiway Partitioning
In this section we generalize the notion of k-cut and we use it for bounding the smallest and largest eigenvalues of the classical Laplacian Δ_{2}.
Definition 7.9
A k-tuple (S_{1},…,S_{k}) of sets \(S_{r}\subseteq V\) is called a (k,l)-family if it covers S_{1} ∪… ∪ S_{k} exactly l times (i.e., each vertex i ∈ S_{1} ∪… ∪ S_{k} lies in exactly l sets \(S_{i_{1}},\ldots ,S_{i_{l}}\)). If, furthermore, S_{1} ∪… ∪ S_{k} = V, then we call the (k,l)-family a (k,l)-cover.
Remark 7.10
A (k,1)-cover is a k-partition (or k-cut).
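The defining condition is easy to check mechanically. A minimal sketch (the helper `family_type` is hypothetical, introduced only for this illustration): it returns the parameters (k, l) and whether the tuple is a family or a cover, or `None` if the multiplicity is not constant.

```python
# Check whether a tuple of vertex sets is a (k, l)-family / (k, l)-cover:
# every vertex of the union must lie in exactly l of the sets.
def family_type(sets, V):
    union = set().union(*sets)
    counts = {v: sum(v in S for S in sets) for v in union}
    multiplicities = set(counts.values())
    if len(multiplicities) != 1:
        return None                      # not a (k, l)-family
    l = multiplicities.pop()
    kind = "cover" if union == set(V) else "family"
    return (len(sets), l, kind)

V = range(4)
print(family_type([{0, 1}, {2, 3}], V))          # (2, 1, 'cover'): a 2-cut
print(family_type([{0, 1}, {1, 2}, {2, 0}], V))  # (3, 2, 'family')
```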
Theorem 7.11
Let λ_{1} and λ_{n} be the smallest and the largest eigenvalue of the classical normalized Laplacian Δ_{2}, respectively. For any (k,l)family,
Proof
We first focus on the case in which (S_{1},…,S_{k}) is a (k,l)-cover. For r ∈{1,…,k}, define a function \(f_{r}:V\to \mathbb {R}\) by
Then
Consequently,
where we have used the equality
since each vertex in h is covered l times by S_{1},…,S_{k} (l ≤ k).
Also, \({\sum \limits }_{i\in V} \deg (i)f_{r}(i)^{2}=\text {Vol}(S_{r})t^{2}+(\text {Vol}(V)-\text {Vol}(S_{r}))s^{2}\) and \({\sum }_{r=1}^{k}\text {Vol}(S_{r})=l\text {Vol}(V)\). Hence,
By the basic inequality
we have
We can verify that the minimum and maximum of the above quantity belong to
To see this, we make the following observations.

1.
For s = 0, the quantity η(t,s) is \(\frac {{\sum \limits }_{r=1}^{k}e(S_{r})}{l\text {Vol}(V)}\).

2.
For s≠ 0, the quantity η(t,s) is
$$ \begin{array}{@{}rcl@{}} &&\frac{(\frac ts-1)^{2}\frac{1}{l}{\sum\limits}_{r=1}^{k} e(S_{r})+\left( \frac kl+2(\frac ts-1)\right)e(V)}{\text{Vol}(V)\left( (\frac ts)^{2}-1+\frac kl\right)}\\ &&=\frac{e(V)}{\text{Vol}(V)}+\frac{(\frac ts-1)^{2}}{(\frac ts)^{2}-1+\frac kl}\cdot\frac{\frac1l{\sum\limits}_{r=1}^{k} e(S_{r})-e(V)}{\text{Vol}(V)}. \end{array} $$In fact, since \(\max \limits _{(t,s)\ne (0,0)}\frac {(\frac ts-1)^{2}}{(\frac ts)^{2}-1+\frac kl}=\frac {k}{k-l}\) and \(\min \limits _{(t,s)\ne (0,0)}\frac {(\frac ts-1)^{2}}{(\frac ts)^{2}-1+\frac kl}=0\) (recall that k ≥ l),
$$ \left\{\max_{s\ne0,t\in\mathbb{R}}\eta(t,s), \min_{s\ne0,t\in\mathbb{R}}\eta(t,s)\right\} = \left\{\frac{e(V)}{\text{Vol}(V)}, \frac{e(V)}{\text{Vol}(V)} + \frac{k}{k-l}\cdot\frac{\frac1l{\sum\limits}_{r=1}^{k} e(S_{r}) - e(V)}{\text{Vol}(V)}\right\}. $$
The proof of the claim is then completed by observing that
For a general (k,l)-family, we can consider \(V^{\prime }:=S_{1}\cup \ldots \cup S_{k}\) and \(H^{\prime }:=\{h\cap V^{\prime }:h\in H\}\). Then, \({{\varGamma }}^{\prime }:=(V^{\prime },H^{\prime })\) is the subhypergraph of Γ restricted to \(V^{\prime }\). According to [33, Lemma 2.21], \(\lambda _{n}({{\varGamma }})\ge \lambda _{\max \limits }({{\varGamma }}^{\prime })\) and \(\lambda _{1}({{\varGamma }})\le \lambda _{1}({{\varGamma }}^{\prime })\). Applying the case of the (k,l)-cover to \({{\varGamma }}^{\prime }\), we complete the proof. □
Corollary 7.12
Remark 7.13
For a graph, e(S) = ∂(S) and a (2,1)-cover is a standard 2-cut. Theorem 7.11 shows that
where \(2\max \limits _{S\subset V}\frac {\partial (S)}{\text {Vol}(V)}\) is the normalized max-cut ratio.
Also, Theorem 7.11 applied to (2,1)-families for a graph implies that
where \(\max \limits _{S_{1}\cap S_{2}=\emptyset }\frac {2E(S_{1},S_{2})}{\text {Vol}(S_{1})+\text {Vol}(S_{2})}\) is exactly the dual Cheeger constant [2].
Interestingly, applying Theorem 7.11 to a (k,1)-cover of a graph, we get
which relates to the max k-cut problem.
General Partitions
Lemma 7.14
We have
and
Proof
Given a partition (V_{1},…,V_{k}) of V and r ∈{1,…,k}, define a function \(f_{r}:V\to \mathbb {R}\) by
Then,
Consequently,
Also, we have
Now, note that
Next, we give a lower bound for (25). By the convexity of the function t↦|t|^{p}, we have
which implies \(|B-A|^{p}\ge c^{p-1}|B|^{p}-\left (\frac {c}{1-c}\right )^{p-1}|A|^{p}\). Thus,
Finally, the same method gives
□
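The convexity estimate used in this proof, \(|B-A|^{p}\ge c^{p-1}|B|^{p}-(\frac {c}{1-c})^{p-1}|A|^{p}\) for c ∈ (0,1), follows from applying \(|cx+(1-c)y|^{p}\le c|x|^{p}+(1-c)|y|^{p}\) with x = (B−A)/c and y = A/(1−c). It can be sanity-checked numerically over random inputs:

```python
import random

# Numerical check of the convexity estimate:
# |B - A|^p >= c^{p-1} |B|^p - (c / (1 - c))^{p-1} |A|^p,  c in (0, 1).
random.seed(0)
for _ in range(1000):
    A = random.uniform(-5, 5)
    B = random.uniform(-5, 5)
    p = random.uniform(1, 4)
    c = random.uniform(0.05, 0.95)
    lhs = abs(B - A) ** p
    rhs = c ** (p - 1) * abs(B) ** p - (c / (1 - c)) ** (p - 1) * abs(A) ** p
    assert lhs >= rhs - 1e-9   # small tolerance for floating-point error
print("inequality verified")
```

For p = 1 both exponents vanish and the estimate reduces to the triangle inequality |B − A| ≥ |B| − |A|.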
Corollary 7.15
The following constants are smaller than or equal to λ_{n}(Δ_{p}):
Proof
Taking t = k − 1 and \(c=\frac 12\) in (23), we have the first.
Taking t = k − 1 and \(c=\frac 1k\) in (23), we get the middle one.
Taking t = 1 and \(c=\frac 12\) in (23), we obtain the last one. □
Corollary 7.16
The following constants are larger than or equal to λ_{1}(Δ_{p}):
Proof
Taking t = − 1 in (24), we get the first constant. Letting \(t\to \infty \) in (24), we obtain the second one. □
Corollary 7.17
Proof
Taking p = 1 in Lemma 7.14, we have
and
□
Hyperedge Partition Problems
While in the previous section we discussed vertex partition problems and their relation to Δ_{p}, here we introduce the analogous hyperedge partition problems and their relations with \({{\Delta }_{p}^{H}}\). We start by defining, for each \(\emptyset \neq \hat {H}\subseteq H\), a quantity \(e_{p}(\hat {H})\) analogous to the quantity e_{p}(S) defined for subsets of vertices. Namely, we let
where, given i ∈ V, we let
We also define
Remark 8.1
Analogously to the vertex case, we can say that computing \(e_{p}(\hat {H})\) means deleting all hyperedges in \(H\setminus \hat {H}\) and then computing e_{p} on the hyperedge set of the subhypergraph obtained. It is therefore interesting to observe that, when e_{p} is computed on H,
where the first inequality is an equality if and only if each vertex is as often an input as an output, while the second one is an equality if and only if each vertex has the same sign in all hyperedges in which it is contained.
Furthermore, if the subhypergraph \(\hat {{{\varGamma }}}:=(V,\hat {H})\) of Γ is bipartite, without loss of generality we can assume that each vertex is either always an input or always an output for each hyperedge in which it is contained. In this case,
and, in particular, \(\eta _{p}(\hat {H})\) coincides with the quantity in [31, Definition 2.9]. Moreover, in the particular case when \(\hat {H}=\{h\}\) is given by one single hyperedge, then
We now generalize [31, Lemma 4.1] for all p.
Proposition 8.2
For all p, we have that
with equality if p = 1.
Proof
Let \(\gamma ^{\prime }:H\rightarrow \mathbb {R}\) be 1 on \(\hat {H}\) and 0 otherwise. Then, up to changing (without loss of generality) the orientations of the hyperedges,
Since the above inequality is true for all \(\hat {{{\varGamma }}}\), this proves the first claim.
If p = 1, then
where Q is the Cheeger-like quantity defined in [31]. By [31, Lemma 5.2], Q = μ_{m}. Therefore, the last inequalities shrink to equalities. □
Now, analogously to the vertex partition problems, we discuss hyperedge partition problems.
Definition 8.3
A k-hyperedge partition is a partition of the hyperedge set into k disjoint sets, H = H_{1} ⊔… ⊔ H_{k}. The balanced minimum k-hyperedge cut is
The maximum k-hyperedge cut is
The signed hyperedge coloring number, denoted \(\chi _{\text {sign}}^{H}\), is the minimal k for which there exists a function \(\gamma :H\rightarrow \{1,\ldots ,k\}\) such that, for all i ∈ V, \(\gamma (h)\neq \gamma (h^{\prime })\) if i is an input for h and an output for \(h^{\prime }\).
The following lemma is the analog of some results regarding vertex partition problems. It relates the balanced minimum k-hyperedge cut and the maximum k-hyperedge cut to the smallest and largest eigenvalues of \({{\Delta }_{p}^{H}}\), respectively.
Lemma 8.4
For each \(\emptyset \neq \hat {H}\subseteq H\) and for each p ≥ 1, \(\mu _{1}\leq \eta _{p}(\hat {H})\leq \mu _{m}\). Therefore, in particular, for each k ∈{2,…,n}
and
Proof
Given H_{i}, let γ ∈ C(H) be 1 on H_{i} and 0 otherwise. Then, RQ_{p}(γ) = η_{p}(H_{i}). Therefore, μ_{1} ≤ η_{p}(H_{i}) ≤ μ_{m}. The other claims follow by applying these inequalities to all elements of a partition. □
Corollary 8.5
Let \(\chi _{\text {sign}}^{H}\) be the signed hyperedge coloring number of Γ and let \(H_{1},\ldots ,H_{\chi _{\text {sign}}^{H}}\) be the corresponding coloring classes. Let also Γ_{j} := (V,H_{j}) for \(j\in \{1,\ldots ,\chi _{\text {sign}}^{H}\}\). For each p ≥ 1,
Proof
By the definition of signed hyperedge coloring number,
for each coloring class H_{j}. Together with Lemma 8.4, this proves the claim. □
Notes
For convenience, we normalize (without loss of generality) the eigenfunctions f_{k,p} of λ_{k}(Δ_{p}); i.e., we assume ∥f_{k,p}∥_{p} = 1.
References
Andreotti, E., Mulas, R.: Signless normalized Laplacian for hypergraphs. arXiv:2005.14484 (2020)
Bauer, F., Jost, J.: Bipartite and neighborhood graphs and the spectrum of the normalized graph Laplacian operator. Commun. Anal. Geom. 21, 787–845 (2013)
Binding, P.A., Rynne, B.P.: Variational and non-variational eigenvalues of the p-Laplacian. J. Differ. Equ. 244, 24–39 (2008)
Brian Davies, E., Gladwell, G.L., Leydold, J., Stadler, P.F.: Discrete nodal domain theorems. Linear Algebra Appl. 336, 51–60 (2001)
Bühler, T., Hein, M.: Spectral clustering based on the graph p-Laplacian. In: Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada (2009)
Cepicka, J., Drábek, P., Girg, P.: Open problems related to the p-Laplacian. Bol. Soc. Esp. Mat. Apl. 29, 13–34 (2004)
Chang, K.C.: The spectrum of the 1-Laplace operator. Commun. Contemp. Math. 11, 865–894 (2009)
Chang, K.C.: Spectrum of the 1-Laplacian and Cheeger’s constant on graphs. J. Graph Theory 81, 167–207 (2016)
Clarke, F.: Generalized gradients and applications. Trans. Amer. Math. Soc. 205, 247–262 (1975)
Clarke, F.H.: Optimization and Nonsmooth Analysis. SIAM, Philadelphia (1990)
Degiovanni, M., Marzocchi, M.: A critical point theory for nonsmooth functionals. Ann. Mat. Pura Appl. 167, 73–100 (1994)
Degiovanni, M., Marzocchi, M.: Limit of minimax values under Γ-convergence. Electron. J. Differ. Equ. 2014, 266 (2014)
Degiovanni, M., Mazzoleni, D.: Optimization results for the higher eigenvalues of the p-Laplacian associated with sign-changing capacitary measures. J. Lond. Math. Soc. 104, 97–146 (2021)
Elmoataz, A., Toutain, M., Tenbrinck, D.: On the p-Laplacian and \(\infty \)-Laplacian on graphs with applications in image and data processing. SIAM J. Imaging Sci. 8, 2412–2451 (2015)
Gaur, D.R., Krishnamurti, R., Kohli, R.: The capacitated max k-cut problem. Math. Program. Ser. A 115, 65–72 (2008)
Gilboa, G., Osher, S.: Nonlocal linear image regularization and supervised segmentation. Multiscale Model. Simul. 6, 595–630 (2007)
Hein, M., Bühler, T.: An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. NIPS, pp. 847–855 (2010)
Hein, M., Setzer, S., Jost, L., Rangapuram, S.S.: The total variation on hypergraphs – learning on hypergraphs revisited. In: Proceedings of the 26th International Conference on Neural Information Processing Systems, vol. 2, pp 2427–2435 (2013)
Jin, Y., Jost, J., Wang, G.: A new nonlocal variational setting for image processing. Inverse Probl. Imaging 9, 415–430 (2015)
Jost, J., Li-Jost, X.: Calculus of Variations. Cambridge Studies in Advanced Mathematics, vol. 64. Cambridge University Press, Cambridge (1998)
Jost, J., Mulas, R.: Hypergraph Laplace operators for chemical reaction networks. Adv. Math. 351, 870–896 (2019)
Jost, J., Mulas, R.: Cheeger-like inequalities for the largest eigenvalue of the graph Laplace operator. J. Graph Theory 97, 408–425 (2021)
Jost, J., Mulas, R., Zhang, D.: p-Laplace Operators for Oriented Hypergraphs (second part). In preparation
Kawohl, B., Fridman, V.: Isoperimetric estimates for the first eigenvalue of the p-Laplace operator and the Cheeger constant. Comment. Math. Univ. Carolin. 44, 659–667 (2003)
Kawohl, B., Schuricht, F.: Dirichlet problems for the 1-Laplace operator, including the eigenvalue problem. Commun. Contemp. Math. 9, 515–543 (2007)
Kawohl, B., Schuricht, F.: First eigenfunctions of the 1-Laplacian are viscosity solutions. Commun. Pure Appl. Anal. 14, 329–339 (2015)
Littig, S., Schuricht, F.: Convergence of the eigenvalues of the p-Laplace operator as p goes to 1. Calc. Var. Partial Differ. Equ. 49, 707–727 (2014)
Lucia, M., Schuricht, F.: Mountain pass solution for nonsmooth elliptic problems. Minimax Theory Appl. 5, 129–150 (2020)
Milbers, Z., Schuricht, F.: Existence of a sequence of eigensolutions for the 1-Laplace operator. J. Lond. Math. Soc. 82, 74–88 (2010)
Milbers, Z., Schuricht, F.: Necessary condition for eigensolutions of the 1-Laplace operator by means of inner variations. Math. Ann. 356, 147–177 (2013)
Mulas, R.: Sharp bounds for the largest eigenvalue. Math. Notes 109, 102–109 (2021)
Mulas, R., Kuehn, C., Jost, J.: Coupled dynamics on hypergraphs: Master stability of steady states and synchronization. Phys. Rev. E 101, 062313 (2020)
Mulas, R., Zhang, D.: Spectral theory of Laplace operators on oriented hypergraphs. Discrete Math. 344, 112372 (2021)
Parini, E.: An introduction to the Cheeger problem. Surv. Math. Appl. 6, 9–22 (2011)
Rangapuram, S., Mudrakarta, P.K., Hein, M.: Tight continuous relaxation of the balanced k-cut problem. NeurIPS (2014)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)
Schuricht, F.: An alternative derivation of the eigenvalue equation for the 1-Laplace operator. Arch. Math. 87, 572–577 (2006)
Shi, C.J.: A signed hypergraph model of the constrained via minimization problem. Microelectron. J 23, 533–542 (1992)
Takeuchi, H.: The spectrum of the p-Laplacian and p-harmonic morphisms on graphs. Illinois J. Math. 47, 939–955 (2003)
Tudisco, F., Hein, M.: A nodal domain theorem and a higher-order Cheeger inequality for the graph p-Laplacian. J. Spectral Theory 8, 883–908 (2018)
Valtorta, D.: On the p-Laplace operator on Riemannian manifolds. PhD thesis, Università degli Studi di Milano (2014). arXiv:1212.3422v3 (2012)
Acknowledgements
We thank Friedemann Schuricht and two anonymous referees for valuable suggestions and references.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Dedicated to Bernd Sturmfels on the occasion of his 60th birthday.
Cite this article
Jost, J., Mulas, R. & Zhang, D.: p-Laplace Operators for Oriented Hypergraphs. Vietnam J. Math. 50, 323–358 (2022). https://doi.org/10.1007/s10013-021-00525-4
Keywords
 Oriented hypergraphs
 Spectral theory
 p-Laplacian
Mathematics Subject Classification (2010)
 05C65
 47J10
 47H04
 05C50
 47H05
 46F30