1 Introduction

The digital world has brought with it data of ever-increasing size and complexity. Indeed, modern devices allow us to easily obtain high-resolution images, as well as to collect data on internet searches, healthcare analytics, social networks, geographic information systems or business informatics. The study and treatment of such big data sets is of great interest and value. To this aim, weighted discrete graphs provide the most natural and flexible workspace in which to represent the data: a vertex represents a data point and each edge is weighted according to an appropriately chosen measure of “similarity” between the corresponding vertices. Historically, the main tools for the study of graphs came from combinatorial graph theory. However, following the implementation of the graph Laplacian in the development of spectral clustering in the seventies, the theory of partial differential equations on graphs has produced important results in this field. This has prompted a big surge in research on nonlocal partial differential equations. Moreover, interest has been further bolstered by the study of problems in image processing, by the analysis of the peridynamic formulation of continuum mechanics and by the study of Markov jump processes, among other problems. Some references on these topics are given throughout the survey; see also [11, 21, 27, 30, 31, 33, 39, 42, 53, 57,58,59].

In recent years, and with these problems in mind, we have studied some gradient flows in the general framework of a metric random walk space, that is, a Polish metric space \((X,d)\) together with a probability measure \(m_x\) assigned to each \(x\in X\) that encodes the jumps of a Markov process. In particular, we have studied, in this framework, the heat flow, the total variation flow, and evolution problems of Leray–Lions type with different types of nonhomogeneous boundary conditions. In doing so, we have been able to unify into a broad framework the study of these problems in weighted discrete graphs and other nonlocal models of interest. Specifically, together with the existence and uniqueness of solutions to the aforementioned problems, a wide variety of their properties have been studied (some of which are listed in the contents section), as well as the nonlocal diffusion operators involved in them. This survey is mainly based on the results that we have obtained in [46, 47] (see also the more recent work [55]) and [48]. In relation to the above problems, we have also studied (see [49]) the \((BV,L^p)\)-decomposition, \(p=1\) and \(p=2\), of functions in metric random walk spaces.

Let us briefly describe the contents of the paper. To start with, in Sect. 2 we introduce the general framework of a metric random walk space and give important examples.

Section 3 is devoted to the study of the heat flow. In our context, associated with the random walk \(m=(m_x)_{x\in X}\), the Laplace operator \(\Delta _m\) is defined as

$$\begin{aligned} \Delta _m f (x):= \int _X (f(y) - f(x)) dm_x(y). \end{aligned}$$

Assuming that there exists a measure \(\nu \) satisfying a reversibility condition with respect to the random walk, the operator \(- \Delta _m\) generates in \(L^2(X, \nu )\) a Markovian semigroup \((e^{t \Delta _m})_{t \ge 0}\) (Theorem 3.1) called the heat flow on the metric random walk space. Thanks to the generality of our framework, the results that we obtain are applicable, for example, to the heat flow on graphs or to nonlocal models in \({\mathbb {R}}^N\) associated to a nonsingular kernel. Moreover, we introduce a connectedness property for the random walk, called m-connectedness, which allows us, for example, to characterise the infinite speed of propagation of the heat flow (Theorem 3.8) and the ergodicity of the invariant measure associated with the random walk. Furthermore, the behaviour of the semigroup \((e^{t \Delta _m})_{t \ge 0}\) as \(t \rightarrow \infty \) is of great importance in many applications; in this regard, and with the help of a Poincaré inequality, we obtain rates of convergence. Finally, we study the relation between this Poincaré inequality and the Bakry–Émery curvature condition.

In Sect. 4 we study the total variation flow. For this purpose, we introduce the 1-Laplacian operator associated with a metric random walk space, as well as the notions of perimeter and mean curvature for subsets of a metric random walk space. In doing so, we generalize results obtained in [44, 45] for the particular case of \({\mathbb {R}}^N\) with a nonsingular kernel as well as some results in graph theory. We then proceed to prove existence and uniqueness of solutions of the total variation flow in metric random walk spaces and to study its asymptotic behaviour with the help of some Poincaré type inequalities.

One motivation for the study of the 1-Laplacian operator comes from spectral clustering. Partitioning data into sensible groups is a fundamental problem in machine learning, computer science, statistics and science in general. In these fields, it is usual to face large amounts of empirical data, and getting a first impression of these data by identifying groups with similar properties has proved to be very useful. One of the most popular approaches to this problem is to find the best balanced cut of a graph representing the data, such as the Cheeger ratio cut [19]. Consider a finite weighted connected graph \(G =(V, E)\), where \(V = \{x_1, \ldots , x_n \}\) is the set of vertices (or nodes) and E the set of edges, which are weighted by a function \(w_{ji}= w_{ij} \ge 0\), \((i,j) \in E\). The degree of the vertex \(x_i\) is denoted by \(d_i:= \sum _{j=1}^n w_{ij}\), \(i=1,\ldots , n\). In this context, the Cheeger cut value of a partition \(\{ S, S^c\}\) (\(S^c:= V \setminus S\)) of V is defined as

$$\begin{aligned} {\mathcal {C}}(S):= \frac{\mathrm{Cut}(S,S^c)}{\min \{\mathrm{vol}(S), \mathrm{vol}(S^c)\}}, \end{aligned}$$

where

$$\begin{aligned} \mathrm{Cut}(A,B) = \sum _{i \in A, j \in B} w_{ij}, \end{aligned}$$

and \(\mathrm{vol}(S)\) is the volume of S, defined as \(\mathrm{vol}(S):= \sum _{i \in S} d_i\). Furthermore,

$$\begin{aligned} h(G) = \min _{S \subset V} {\mathcal {C}}(S) \end{aligned}$$

is called the Cheeger constant, and a partition \(\{ S, S^c\}\) of V is called a Cheeger cut of G if \(h(G)={\mathcal {C}}(S)\). Unfortunately, the Cheeger minimization problem of computing h(G) is NP-hard [36, 56]. However, it turns out that h(G) can be approximated by the second eigenvalue \(\lambda _2\) of the graph Laplacian thanks to the following Cheeger inequality [20]:

$$\begin{aligned} \frac{\lambda _2}{2} \le h(G) \le \sqrt{2\lambda _2}. \end{aligned}$$
(1.1)

This motivates the spectral clustering method [43], which, in its simplest form, thresholds the second eigenvector of the graph Laplacian to get an approximation to the Cheeger constant and, moreover, to a Cheeger cut. In order to achieve a better approximation than the one provided by the classical spectral clustering method, a spectral clustering based on the graph p-Laplacian was developed in [14], where it is shown that the second eigenvalue of the graph p-Laplacian tends to the Cheeger constant h(G) as \(p \rightarrow 1^+\). In [56] the idea was further developed by directly considering the variational characterization of the Cheeger constant h(G)

$$\begin{aligned} h(G) = \min _{u \in L^1} \frac{ \vert u \vert _{TV}}{\Vert u - \mathrm{median}(u) \Vert _1}, \end{aligned}$$
(1.2)

where

$$\begin{aligned} \vert u \vert _{TV} := \frac{1}{2} \sum _{i,j=1}^n w_{ij} \vert u(x_i) - u(x_j) \vert . \end{aligned}$$

The subdifferential of the energy functional \(\vert \cdot \vert _{TV}\) is the 1-Laplacian in graphs \(\Delta _1\). Using the nonlinear eigenvalue problem \(\lambda \, \mathrm{sign}(u) \in \Delta _1 u\), the theory of 1-Spectral Clustering is developed in [16,17,18, 36]. In [46], we obtained a generalization, in the framework of metric random walk spaces, of the Cheeger inequality (1.1) and of the variational characterization of the Cheeger constant (1.2).
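The Cheeger inequality (1.1) and the NP-hardness remark above can be illustrated numerically. The following sketch (in Python with NumPy; the example graph, its weights and all variable names are ours, not from the survey) brute-forces the Cheeger constant of a small graph and checks it against the second eigenvalue of the normalized graph Laplacian:

```python
import itertools
import numpy as np

# A small weighted graph: two triangles joined by one weak edge, so the
# optimal Cheeger cut separates the two triangles.
W = np.array([[0, 1, 1, 0.1, 0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0,   0, 0],
              [0.1, 0, 0, 0, 1, 1],
              [0, 0, 0, 1,   0, 1],
              [0, 0, 0, 1,   1, 0]], dtype=float)
d = W.sum(axis=1)            # weighted degrees d_i
n = len(d)

def cheeger_ratio(S):
    Sc = [i for i in range(n) if i not in S]
    cut = W[np.ix_(list(S), Sc)].sum()               # Cut(S, S^c)
    return cut / min(d[list(S)].sum(), d[Sc].sum())  # C(S)

# Brute-force h(G) over all nontrivial subsets (exponential cost: NP-hard).
h = min(cheeger_ratio(S)
        for r in range(1, n)
        for S in itertools.combinations(range(n), r))

# Second eigenvalue of the normalized graph Laplacian I - D^{-1/2} W D^{-1/2}.
D_inv_sqrt = np.diag(d ** -0.5)
lam2 = np.sort(np.linalg.eigvalsh(np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt))[1]

# The Cheeger inequality (1.1).
assert lam2 / 2 <= h <= np.sqrt(2 * lam2)
```

On this graph the optimal cut separates the two triangles, giving \(h(G)=0.1/6.1\).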

Moreover, in Sect. 4 we introduce the concepts of Cheeger and calibrable sets in metric random walk spaces and characterise the calibrability of a set by using the 1-Laplacian operator. Furthermore, we study the eigenvalue problem of the 1-Laplacian and relate it to the optimal Cheeger cut problem. These results apply, in particular, to locally finite weighted connected discrete graphs, complementing the results given in [16,17,18, 36].

Finally, in Sect. 5 we study p-Laplacian type evolution problems like the one given in the following reference model:

$$\begin{aligned} u_t(t,x) = \int _{\Omega \cup \partial _m\Omega } \vert u(y)-u(x)\vert ^{p-2}(u(y) - u(x)) dm_x(y), \quad x\in \Omega ,\ 0<t<T, \end{aligned}$$
(1.3)

with

$$\begin{aligned} \hbox { nonhomogeneous Neumann boundary conditions,} \end{aligned}$$

where \(\Omega \subset X\) and \(\partial _m\Omega =\{ x\in X\setminus \Omega : m_x(\Omega )>0 \}\) is the m-boundary of \(\Omega \). This reference model can be regarded as the nonlocal counterpart to the classical evolution problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=\hbox {div}(|\nabla u|^{p-2}\nabla u),&{}\quad x\in U,\ 0<t<T,\\ -|\nabla u|^{ p-2}\nabla u\cdot \eta =\varphi ,&{} \quad x\in \partial U,\ 0<t<T, \end{array}\right. \end{aligned}$$

where U is a bounded smooth domain in \({\mathbb {R}}^n\), and \(\eta \) is the outer normal vector to \(\partial U\).

Nonlocal diffusion problems of p-Laplacian type with homogeneous Neumann boundary conditions have been studied in nonlocal models in \({\mathbb {R}}^N\) associated to a non-singular kernel (see, for example, [4, 5]) and also in weighted discrete graphs (see, for example, the work of Hafiene et al. [35]) with the following formulation:

$$\begin{aligned} u_t(t,x) = \int _{\Omega } \vert u(y)-u(x)\vert ^{p-2}(u(y) - u(x))dm_x(y), \quad x\in \Omega ,\ 0<t<T. \end{aligned}$$
(1.4)

However, nonhomogeneous boundary conditions have only been studied in the linear case. For example, Cortazar et al. [22] work on a perturbed version of the linear case of Problem (1.4) (\(p=2\)) in \({\mathbb {R}}^N\) with a non-singular kernel. Moreover, in [34], Gunzburger and Lehoucq develop a nonlocal vector calculus with applications to linear nonlocal problems in which the nonlocal Neumann boundary condition considered can be rewritten as

$$\begin{aligned} -\int _{\Omega \cup \partial _m \Omega } (u(y) - u(x)) dm_x(y) =\varphi (x), \quad x \in \partial _m\Omega . \end{aligned}$$
(1.5)

Another interesting approach is proposed by Dipierro et al. in [25] for the particular case of the fractional Laplacian diffusion (although the idea can be used for other kernels) with the following Neumann boundary condition, that we rewrite in the context of metric random walk spaces,

$$\begin{aligned} -\int _\Omega (u(x) - u(y)) dm_x(y)=\varphi (x), \quad x \in {\partial _m\Omega }; \end{aligned}$$
(1.6)

or, alternatively, if one prefers a normalized boundary condition with respect to the underlying probability measure induced by the jump process under consideration,

$$\begin{aligned} -\frac{1}{m_x(\Omega )}\int _\Omega (u(x) - u(y)) dm_x(y)=\varphi (x), \quad x \in {\partial _m\Omega }. \end{aligned}$$

Therefore, in this latter case and as remarked in [25], when a particle of mass u(x) exits \(\Omega \) towards a point \(x\in \partial _m\Omega \), a mass \(u(x)-\varphi (x)\) immediately comes back into \(\Omega \) according to the distribution \(\frac{1}{m_x(\Omega )}m_x\):

$$\begin{aligned} \frac{1}{m_x(\Omega )}\int _\Omega u(y)dm_x(y)=u(x)-\varphi (x),\quad x\in \partial _m \Omega . \end{aligned}$$

A similar probabilistic interpretation can be given for the Neumann boundary condition (1.5), but involving all of \(\Omega \cup \partial _m\Omega \). In any case, observe that the formulations (1.5) and (1.6) differ significantly in their domain of integration.

In our work we study Problem (1.3) with the nonhomogeneous Neumann boundary conditions of Gunzburger–Lehoucq type and also of Dipierro–Ros-Oton–Valdinoci type.

2 Metric random walk spaces

Let \((X,d)\) be a Polish metric space equipped with its Borel \(\sigma \)-algebra. Every measure considered in this survey is defined on this \(\sigma \)-algebra.

A random walk m on X is a family of probability measures \((m_x)_{x\in X}\) on X depending measurably on x, i.e., for any Borel set A of X and any Borel set B of \({\mathbb {R}}\), the set \(\{ x \in X \ : \ m_x(A) \in B \}\) is Borel. When dealing with optimal transport problems we will further assume that each measure \(m_x\) has finite first moment (see [52]).

Definition 2.1

A metric random walk space \([X,d,m]\) is a Polish metric space \((X,d)\) together with a random walk m on X.

For a metric random walk space \([X,d,m]\), a Radon measure \(\nu \) on X is said to be invariant for the random walk \(m=(m_x)\) if

$$\begin{aligned} d\nu (x)=\int _{y\in X}d\nu (y)dm_y(x), \end{aligned}$$

that is, for any \(\nu \)-measurable set A, it holds that A is \(m_x\)-measurable for \(\nu \)-almost all \(x\in X\), \( x\mapsto m_x(A)\) is \(\nu \)-measurable, and

$$\begin{aligned} \nu (A)=\int _X m_x(A)d\nu (x). \end{aligned}$$

Consequently, if \(\nu \) is an invariant measure with respect to m and \(f \in L^1(X, \nu )\), it holds that \(f \in L^1(X, m_x)\) for \(\nu \)-a.e. \(x \in X\), \( x\mapsto \int _X f(y) d{m_x}(y)\) is \(\nu \)-measurable, and

$$\begin{aligned} \int _X f(x) d\nu (x) = \int _X \left( \int _X f(y) d{m_x}(y) \right) d\nu (x). \end{aligned}$$

The measure \(\nu \) is said to be reversible for m if, moreover, the following detailed balance condition holds:

$$\begin{aligned} dm_x(y)d\nu (x) = dm_y(x)d\nu (y), \end{aligned}$$
(2.1)

i.e., for all bounded Borel functions f defined on \(X\times X\)

$$\begin{aligned} \int _X \int _X f(x,y) dm_x(y) d\nu (x) =\int _X\int _X f(y,x) dm_x(y) d\nu (x) . \end{aligned}$$

Note that the reversibility condition is stronger than the invariance condition. Of course, if \(\nu (X)<+\infty \) then \(\nu \) can, and will, be normalized to a probability measure.

As mentioned by Ollivier in [52], a geometer may think of \(m_x\) as a replacement for the notion of ball around x, while in probabilistic terms we may rather think of this data as defining a Markov chain whose transition probability from x to y in n steps is

$$\begin{aligned} dm_x^{*n}(y):= \int _{z \in X} dm_z(y)dm_x^{*(n-1)}(z), \end{aligned}$$

where \(m_x^{*0} = \delta _x\). Of course, \([X, d, m^{*n}]\) is also a metric random walk space. Moreover, if \(\nu \) is invariant (reversible) for m, then \(\nu \) is also invariant (reversible) for \(m^{*n}\).
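On a finite (or countable) state space, the convolution formula above is just matrix multiplication: if row x of a matrix K is the measure \(m_x\), then \(m_x^{*n}\) is row x of \(K^n\). A small sketch (the kernel and the measure below are ours, chosen so that \(\nu \) is invariant):

```python
import numpy as np

# One-step transition matrix K: row x is the probability measure m_x.
K = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

def n_step(K, n):
    # m^{*0} = delta_x is the identity matrix; m^{*n} = K applied n times.
    return np.linalg.matrix_power(K, n)

# An invariant measure nu for m stays invariant for every m^{*n}.
nu = np.array([0.25, 0.5, 0.25])
assert np.allclose(nu @ K, nu)
for n in range(5):
    assert np.allclose(nu @ n_step(K, n), nu)
```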

We now give some well-known examples of metric random walk spaces which will aid in illustrating the generality of this abstract setting. In particular, Markov chains serve as paradigmatic examples that capture many of the properties of this general setting that we will encounter during this study.

Example 2.2

(1) Consider \(({\mathbb {R}}^N, d, {\mathcal {L}}^N)\), with d the Euclidean distance and \({\mathcal {L}}^N\) the Lebesgue measure. For simplicity we will write dx instead of \(d{\mathcal {L}}^N(x)\). Let \(J:{\mathbb {R}}^N\rightarrow [0,+\infty [\) be a measurable, nonnegative and radially symmetric function verifying \(\int _{{\mathbb {R}}^N}J(x)dx=1\). In \(({\mathbb {R}}^N, d, {\mathcal {L}}^N)\) we have the following random walk, starting at x,

$$\begin{aligned} m^J_x(A) := \int _A J(x - y) dy \quad \hbox {for every Borel set } A \subset {\mathbb {R}}^N . \end{aligned}$$

Applying Fubini’s Theorem it is easy to see that the Lebesgue measure \({\mathcal {L}}^N\) is a reversible (thus invariant) measure for this random walk.

Observe that, if we assume that in \({\mathbb {R}}^N\) we have a homogeneous population and \(J(x-y)\) is thought of as the probability distribution of jumping from location x to location y, then, for a Borel set A in \({\mathbb {R}}^N\), \(m^J_x(A)\) measures the proportion of individuals that started at x and arrive in A after one jump. The same ideas apply to the countable spaces given in the following two examples.

(2) Let \(K: X \times X \rightarrow {\mathbb {R}}\) be a Markov kernel on a countable space X, i.e.,

$$\begin{aligned} K(x,y) \ge 0 \quad \forall x,y \in X, \qquad \sum _{y\in X} K(x,y) = 1 \quad \forall x \in X. \end{aligned}$$

Then, for

$$\begin{aligned} m^K_x(A):= \sum _{y \in A} K(x,y), \quad A\subset X, \end{aligned}$$

\([X, d, m^K]\) is a metric random walk space for any metric d on X.

Moreover, in Markov chain theory terminology, a measure \(\pi \) on X satisfying

$$\begin{aligned} \sum _{x \in X} \pi (x) = 1 \qquad \hbox {and} \qquad \pi (y) = \sum _{x \in X} \pi (x) K(x,y) \quad \forall y \in X, \end{aligned}$$

is called a stationary probability measure (or steady state) on X. This is equivalent to the definition of an invariant probability measure for the metric random walk space \([X, d, m^K]\). In general, the existence of such a stationary probability measure on X is not guaranteed (see, for instance, [51, Example 1.7.11]). However, for irreducible and positive recurrent Markov chains (see, for example, [37] or [51]) there exists a unique stationary probability measure.

Furthermore, a stationary probability measure \(\pi \) is said to be reversible for K if the following detailed balance equation holds:

$$\begin{aligned} K(x,y) \pi (x) = K(y,x) \pi (y) \quad \hbox {for } x, y \in X. \end{aligned}$$

By Tonelli’s Theorem for series, this balance condition is equivalent to the one given in (2.1) for \(\nu =\pi \):

$$\begin{aligned} dm^K_x(y)d\pi (x) = dm^K_y(x)d\pi (y). \end{aligned}$$

(3) Consider a locally finite weighted discrete graph \(G = (V(G), E(G))\), where each edge \((x,y) \in E(G)\) (we will write \(x\sim y\) if \((x,y) \in E(G)\)) has a positive weight \(w_{xy} = w_{yx}\) assigned. Suppose further that \(w_{xy} = 0\) if \((x,y) \not \in E(G)\).

A finite sequence \(\{ x_k \}_{k=0}^n\) of vertices of the graph is called a path if \(x_k \sim x_{k+1}\) for all \(k = 0, 1,\ldots , n-1\). The length of a path \(\{ x_k \}_{k=0}^n\) is defined as the number n of edges in the path. Then, \(G = (V(G), E(G))\) is said to be connected if, for any two vertices \(x, y \in V(G)\), there is a path connecting x and y, that is, a path \(\{ x_k \}_{k=0}^n\) such that \(x_0 = x\) and \(x_n = y\). Finally, if \(G = (V(G), E(G))\) is connected, define the graph distance \(d_G(x,y)\) between any two distinct vertices \(x, y\) as the minimum of the lengths of the paths connecting x and y. Note that this metric is independent of the weights. We will always assume that the graphs we work with are connected.

For \(x \in V(G)\) we define the weighted degree at the vertex x as

$$\begin{aligned} d_x:= \sum _{y\sim x} w_{xy} = \sum _{y\in V(G)} w_{xy}, \end{aligned}$$

and the neighbourhood of x as \(N_G(x) := \{ y \in V(G) \, : \, x\sim y\}\). Note that, by definition of locally finite graph, the sets \(N_G(x)\) are finite. When \(w_{xy}=1\) for every \(x\sim y\), \(d_x\) coincides with the degree of the vertex x in a graph, that is, the number of edges containing vertex x.

For each \(x \in V(G)\) we define the following probability measure

$$\begin{aligned} m^G_x:= \frac{1}{d_x}\sum _{y \sim x} w_{xy}\,\delta _y. \end{aligned}$$

We have that \([V(G), d_G, m^G]\) is a metric random walk space and it is not difficult to see that the measure \(\nu _G\) defined as

$$\begin{aligned} \nu _G(A):= \sum _{x \in A} d_x, \quad A \subset V(G), \end{aligned}$$

is a reversible measure for this random walk.

Given a locally finite weighted discrete graph \(G = (V(G), E(G))\), there is a natural definition of a Markov chain on the vertices. We define the Markov kernel \(K_G: V(G)\times V(G) \rightarrow {\mathbb {R}}\) as

$$\begin{aligned} K_G(x,y):= \frac{1}{d_x} w_{xy}. \end{aligned}$$

We have that \(m^G\) and \(m^{K_G}\) define the same random walk. If \(\nu _G(V(G))\) is finite, the unique reversible probability measure is given by

$$\begin{aligned} \pi _G(x):= \frac{1}{\nu _G(V(G))} \sum _{z \in V(G)} w_{xz}. \end{aligned}$$
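For a small finite graph, all the objects of this example fit in a few lines of NumPy. The sketch below (the weights are ours) builds \(K_G\), checks the detailed balance \(K_G(x,y)\,d_x = K_G(y,x)\,d_y\) (both sides equal \(w_{xy}\)), and verifies that the normalization \(\pi _G\) of \(\nu _G\) is stationary:

```python
import numpy as np

# Symmetric weights w_xy of a finite connected graph on three vertices.
W = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
d = W.sum(axis=1)            # weighted degrees d_x
K = W / d[:, None]           # Markov kernel K_G(x,y) = w_xy / d_x

# nu_G(x) = d_x is reversible: K(x,y) d_x = K(y,x) d_y = w_xy.
balance = K * d[:, None]
assert np.allclose(balance, balance.T)

# Normalizing nu_G gives the stationary probability measure pi_G.
pi = d / d.sum()
assert np.allclose(pi @ K, pi)
```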

(4) From a metric measure space \((X,d, \mu )\) we can obtain a metric random walk space, the so-called \(\epsilon \)-step random walk associated to \(\mu \), as follows. Assume that balls in X have finite measure and that \(\mathrm{Supp}(\mu ) = X\). Given \(\epsilon > 0\), the \(\epsilon \)-step random walk on X starting at \(x\in X\) consists in randomly jumping in the ball of radius \(\epsilon \) centered at x with probability proportional to \(\mu \); namely

$$\begin{aligned} m^{\mu ,\epsilon }_x:= \frac{\mu \llcorner B(x,\epsilon )}{\mu (B(x,\epsilon ))}. \end{aligned}$$

If balls of the same radius have the same volume, then \(\mu \) is a reversible measure for the metric random walk space \([X, d, m^{\mu ,\epsilon }]\).

(5) Given a metric random walk space [Xdm] with reversible measure \(\nu \), and given a \(\nu \)-measurable set \(\Omega \subset X\) with \(\nu (\Omega ) > 0\), if we define, for \(x\in \Omega \),

$$\begin{aligned} m^{\Omega }_x(A):=\int _A d m_x(y)+\left( \int _{X\setminus \Omega }d m_x(y)\right) \delta _x(A) \quad \hbox {for every Borel set } A \subset \Omega , \end{aligned}$$

we have that \([\Omega ,d,m^{\Omega }]\) is a metric random walk space and it is easy to see that \(\nu \) restricted to \(\Omega \) is reversible for \(m^{\Omega }\). In particular, if \(\Omega \) is a closed and bounded subset of \({\mathbb {R}}^N\), we obtain the metric random walk space \([\Omega , d, m^{J,\Omega }]\), where \(m^{J,\Omega } = (m^J)^{\Omega }\), that is

$$\begin{aligned} m^{J,\Omega }_x(A):=\int _A J(x-y)dy+\left( \int _{{\mathbb {R}}^N\setminus \Omega }J(x-z)dz\right) \delta _x(A) \quad \hbox {for every Borel set } A \subset \Omega . \end{aligned}$$

See Example 3.5 to understand how we may take advantage of this random walk.
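A finite-state sketch of this construction (the kernel and the choice of \(\Omega \) below are ours): mass that would jump outside \(\Omega \) is redeposited at the starting point x via the \(\delta _x\) term.

```python
import numpy as np

# One-step kernel on a four-point space; row x is the measure m_x.
K = np.array([[0.0, 0.5, 0.3, 0.2],
              [0.5, 0.0, 0.2, 0.3],
              [0.3, 0.2, 0.0, 0.5],
              [0.2, 0.3, 0.5, 0.0]])
Omega = [0, 1]                              # restrict to the first two states

K_Omega = K[np.ix_(Omega, Omega)].copy()    # jumps landing inside Omega
escape = 1.0 - K_Omega.sum(axis=1)          # m_x(X \ Omega) ...
K_Omega += np.diag(escape)                  # ... redeposited at x (delta_x)

# Each m^Omega_x is again a probability measure, and the restriction of a
# reversible measure stays reversible (here the uniform one: K is symmetric).
assert np.allclose(K_Omega.sum(axis=1), 1.0)
assert np.allclose(K_Omega, K_Omega.T)
```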

From this point onwards, when dealing with a metric random walk space [Xdm], we will assume that there exists an invariant and reversible measure for the random walk, which we will always denote by \(\nu \). Then, for simplicity, we will denote the metric random walk space by \([X,d,m,\nu ]\). Furthermore, we assume that the measure space \((X,\nu )\) is \(\sigma \)-finite.

3 The heat flow on metric random walk spaces

3.1 The heat flow

Let \([X,d,m,\nu ]\) be a metric random walk space. For a function \(u : X \rightarrow {\mathbb {R}}\) we define its nonlocal gradient \(\nabla u: X \times X \rightarrow {\mathbb {R}}\) as

$$\begin{aligned} \nabla u (x,y)= u(y) - u(x) \quad \forall \, x,y \in X, \end{aligned}$$

and for a function \({\mathbf{z}}: X \times X \rightarrow {\mathbb {R}}\), its m-divergence \(\mathrm{div}_m {\mathbf{z}}: X \rightarrow {\mathbb {R}}\) is defined as

$$\begin{aligned} (\mathrm{div}_m {\mathbf{z}})(x):= \frac{1}{2} \int _{X} ({\mathbf{z}}(x,y) - {\mathbf{z}}(y,x)) dm_x(y). \end{aligned}$$

The averaging operator on \([X,d,m]\) (see, for example, [52]) is defined as

$$\begin{aligned} M_m f(x):= \int _X f(y) dm_x(y), \end{aligned}$$

when this expression makes sense, and the Laplace operator as \(\Delta _m= M_m - I\), i.e.,

$$\begin{aligned} \Delta _m f(x)= \int _X f(y) dm_x(y) - f(x) = \int _X (f(y) - f(x)) dm_x(y). \end{aligned}$$

Note that

$$\begin{aligned} \Delta _m f (x) = \mathrm{div}_m (\nabla f)(x). \end{aligned}$$

The invariance of \(\nu \) is equivalent to the following property:

$$\begin{aligned} \int _X \Delta _m f(x) d\nu (x) = 0 \quad \forall \, f\in L^1(X,\nu ). \end{aligned}$$
(3.1)

In the case of the metric random walk space associated to a locally finite weighted discrete graph G (see Example 2.2), the above operator is the graph Laplacian studied by many authors (see e.g. [6, 9, 26, 38]).

If the invariant measure \(\nu \) is reversible, the following integration by parts formula is straightforward:

$$\begin{aligned} \int _X f(x) \Delta _m g (x) d\nu (x) = -\frac{1}{2} \int _{X \times X} (f(y)-f(x)) (g(y) - g(x)) dm_x(y) d\nu (x) \end{aligned}$$
(3.2)

for \(f,g \in L^2(X, \nu )\cap L^1(X, \nu )\).
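On a finite graph, both the invariance property (3.1) and the integration by parts formula (3.2) reduce to finite sums and can be checked directly. A sketch (graph weights and variable names are ours):

```python
import numpy as np

# Weighted graph of Example 2.2(3); row x of K is the measure m_x.
W = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
d = W.sum(axis=1)
K = W / d[:, None]
nu = d                       # invariant (indeed reversible) measure nu_G

def laplacian(f):
    # Delta_m f(x) = int_X f dm_x - f(x)
    return K @ f - f

rng = np.random.default_rng(0)
f = rng.normal(size=3)
g = rng.normal(size=3)

# Property (3.1): the nu-integral of Delta_m f vanishes.
assert abs(nu @ laplacian(f)) < 1e-12

# Integration by parts (3.2), using reversibility nu(x) K(x,y) = w_xy.
lhs = nu @ (f * laplacian(g))
rhs = -0.5 * sum(nu[x] * K[x, y] * (f[y] - f[x]) * (g[y] - g[x])
                 for x in range(3) for y in range(3))
assert abs(lhs - rhs) < 1e-12
```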

In \(L^2(X, \nu )\) we consider the symmetric form given by

$$\begin{aligned} {\mathcal {E}}_m(f,g) = - \int _X f(x) \Delta _mg (x) d\nu (x) = \frac{1}{2} \int _{X \times X}\nabla f(x,y)\nabla g(x,y) d{m_x}(y)d\nu (x), \end{aligned}$$

with domain for both variables \(D({\mathcal {E}}_m) = L^2(X, \nu )\cap L^1(X, \nu )\), which is a linear and dense subspace of \(L^2(X,\nu )\). Recalling the definition of the generalized product \(\nu \otimes m_x\) (see, for instance, [2, Definition 2.2.7]), we can write

$$\begin{aligned} {\mathcal {E}}_m(f,g) = \frac{1}{2} \int _{X \times X}\nabla f(x,y)\nabla g(x,y) d(\nu \otimes m_x)(x,y). \end{aligned}$$

Theorem 3.1

[46] Let \([X,d,m,\nu ]\) be a metric random walk space. Then,  \(- \Delta _m\) is a non-negative self-adjoint operator in \(L^2(X, \nu )\) with associated closed symmetric form \({\mathcal {E}}_m,\) which is,  moreover,  a Markovian form.

By Theorem 3.1, as a consequence of the theory developed in [28, Chapter 1], we have that if \((T^m_t)_{t \ge 0}\) is the strongly continuous semigroup associated with \({\mathcal {E}}_m\), then \((T^m_t)_{t \ge 0}\) is a positivity preserving (i.e., \(T^m_t f \ge 0\) if \(f \ge 0\)) Markovian semigroup (i.e., \(0 \le T^m_t f \le 1\) \(\nu \)-a.e. whenever \(f \in L^2(X, \nu )\), \(0 \le f \le 1\) \(\nu \)-a.e.). Moreover, \(\Delta _m\) is the infinitesimal generator of \((T^m_t)_{t \ge 0}\), that is

$$\begin{aligned} \Delta _m f = \lim _{t\downarrow 0} \frac{T^m_t f - f}{t}, \quad \forall \, f \in D(\Delta _m). \end{aligned}$$

Definition 3.2

We denote \(e^{t\Delta _m}:= T^m_t\) and say that \(\{e^{t\Delta _m} \, : \, t \ge 0 \}\) is the heat flow on the metric random walk space \([X,d,m,\nu ]\).

For every \(u_0 \in L^2(X, \nu )\), \(u(t):= e^{t\Delta _m}u_0 \) is the unique solution of the heat equation

$$\begin{aligned} \left\{ \begin{array}{ll} \frac{du}{dt}(t) = \Delta _m u(t) \quad \hbox {for every } t>0, \\ u(0) = u_0, \end{array}\right. \end{aligned}$$
(3.3)

in the sense that \(u \in C([0,+\infty ): L^2(X, \nu )) \cap C^1((0,+\infty ): L^2(X, \nu ))\) and verifies (3.3), or equivalently,

$$\begin{aligned} \left\{ \begin{array}{ll} \frac{du}{dt}(t,x) = \int _{X} (u(t)(y)- u(t)(x)) dm_x(y) \quad \hbox {for every } t>0 \hbox { and }\nu \hbox {-a.e. } x\in X, \\ u(0) = u_0. \end{array}\right. \end{aligned}$$

By the Hille–Yosida exponential formula we have that

$$\begin{aligned} e^{t\Delta _m}u_0 = \lim _{n \rightarrow +\infty } \left[ \left( I - \frac{t}{n} \Delta _m \right) ^{-1} \right] ^n u_0. \end{aligned}$$

As a consequence of (3.1), if \(\nu (X) < +\infty \), we have that the semigroup \((e^{t\Delta _m})_{t \ge 0}\) conserves mass. In fact,

$$\begin{aligned} \frac{d}{dt} \int _X e^{t\Delta _m}u_0(x) d\nu (x) = \int _X \Delta _m e^{t\Delta _m} u_0(x) d\nu (x) = 0, \end{aligned}$$

and, therefore,

$$\begin{aligned} \int _X e^{t\Delta _m}u_0(x) d\nu (x) = \int _X u_0(x) d\nu (x). \end{aligned}$$
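On a finite graph \(\Delta _m = K - I\) for the transition matrix K, so the heat flow is simply the matrix exponential \(e^{t(K-I)}\). The sketch below (graph weights are ours) checks mass conservation with respect to \(\nu \), together with the strict positivity discussed later in Sect. 3.2:

```python
import numpy as np
from scipy.linalg import expm

# Connected weighted graph; K is the random walk, nu = d the reversible measure.
W = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
d = W.sum(axis=1)
K = W / d[:, None]
L = K - np.eye(3)            # the m-Laplacian Delta_m

u0 = np.array([1.0, 0.0, 0.0])   # initial mass concentrated at one vertex
for t in (0.1, 1.0, 10.0):
    u = expm(t * L) @ u0
    assert np.all(u > 0)                  # infinite speed of propagation
    assert np.isclose(d @ u, d @ u0)      # mass conservation w.r.t. nu
```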

Associated with \({\mathcal {E}}_m\) we define the energy functional

$$\begin{aligned} {\mathcal {H}}_m(f) := {\mathcal {E}}_m(f,f), \end{aligned}$$

i.e., \({\mathcal {H}}_m : L^2(X, \nu ) \rightarrow [0, + \infty ]\) with

$$\begin{aligned} {\mathcal {H}}_m(f)= \left\{ \begin{array}{ll} \frac{1}{2} \int _{X \times X} (f(x) - f(y))^2 dm_x(y) d\nu (x) &{}\quad \hbox {if }f\in L^2(X, \nu ) \cap L^1(X, \nu ), \\ + \infty &{}\quad \hbox {otherwise}. \end{array}\right. \end{aligned}$$

We denote

$$\begin{aligned} D({\mathcal {H}}_m)=L^2(X, \nu ) \cap L^1(X, \nu ). \end{aligned}$$

Note that for \(f\in D({\mathcal {H}}_m)\), we have

$$\begin{aligned} {\mathcal {H}}_m(f) = - \int _X f(x) \Delta _m f (x) d\nu (x). \end{aligned}$$

Remark 3.3

The functional \({\mathcal {H}}_m\) is convex, closed and lower semi-continuous in \(L^2(X, \nu )\), and it is not difficult to see that \(\partial {\mathcal {H}}_m = - \Delta _m\). Consequently, \(- \Delta _m\) is a maximal monotone operator in \(L^2(X, \nu )\) (see [13]). We can also consider the heat flow in \(L^1(X,\nu )\). Indeed, if we define in \(L^1(X,\nu )\) the operator A as \(Au = v \iff v(x) = - \Delta _mu(x)\) for all \(x \in X\), then A is an m-completely accretive operator in \(L^1(X,\nu )\) [10]. Therefore, A generates a \(C_0\)-semigroup \((S(t))_{t \ge 0}\) in \(L^1(X,\nu )\) (see [10, 23]) such that \(S(t)f = e^{t\Delta _m} f\) for all \(f \in L^1(X,\nu ) \cap L^2(X,\nu )\), verifying

$$\begin{aligned} \Vert S(t)u_0 \Vert _{L^p(X, \nu )} \le \Vert u_0 \Vert _{L^p(X, \nu )} \quad \forall u_0 \in L^p(X, \nu ) \cap L^1(X, \nu ), \ 1 \le p \le +\infty . \end{aligned}$$

In the case that \(\nu (X) < \infty \), S(t) is an extension to \(L^1(X, \nu )\) of the heat flow \(e^{t \Delta _m}\) in \(L^2(X, \nu )\), which we will denote in the same way.

Theorem 3.4

[46] Let \([X, d, m,\nu ]\) be a metric random walk space. For \(u_0\in L^2(X, \nu )\cap L^1(X, \nu ),\)

$$\begin{aligned} e^{t\Delta _{m}} u_0(x) = e^{-t}\sum _{n=0}^{\infty }\int _{X} u_0(y)dm_x^{*n}(y)\frac{t^n}{n!}, \end{aligned}$$

where \(\int _{X} u_0(y)dm_x^{*0}(y)=u_0(x)\).

In particular,  for \(D \subset X\) with \(\nu (D) < +\infty ,\) we have

$$\begin{aligned} e^{t\Delta _{m}} \chi _D(x) = e^{-t}\sum _{n=0}^{\infty } m_x^{*n}(D)\frac{t^n}{n!}, \end{aligned}$$

where \(\chi _D\) denotes the characteristic function of D.
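On a finite state space the series in Theorem 3.4 can be compared directly with the matrix exponential \(e^{t(K-I)}\), since \(m_x^{*n}\) corresponds to the n-th matrix power of K. A numerical sketch (the kernel, time and initial datum are ours):

```python
import numpy as np
from scipy.linalg import expm

# Symmetric doubly stochastic kernel; row x is the measure m_x.
K = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
t = 1.5
u0 = np.array([1.0, -2.0, 0.5])

# Partial sums of e^{-t} sum_n (t^n / n!) K^n u0  (m^{*0} = delta_x).
series = np.zeros(3)
Kn_u0 = u0.copy()
fact = 1.0
for n in range(30):
    if n > 0:
        Kn_u0 = K @ Kn_u0
        fact *= n
    series += (t ** n / fact) * Kn_u0
series *= np.exp(-t)

exact = expm(t * (K - np.eye(3))) @ u0   # the semigroup e^{t Delta_m} u0
assert np.allclose(series, exact)
```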

Example 3.5

Given a metric random walk space \([X,d,m,\nu ]\) and a Borel set \(\Omega \subset X\), we have that \(u(t):= e^{t\Delta _{m^{\Omega }}}u_0\) is the solution of

$$\begin{aligned} \left\{ \begin{array}{ll} \frac{du}{dt}(t)(x) = \int _{\Omega } (u(t)(y)- u(t)(x)) dm_x(y) \quad \hbox {in}\ (0, +\infty )\times \Omega , \\ u(0) = u_0, \end{array}\right. \end{aligned}$$

which is a homogeneous Neumann problem for the m-heat equation. See [5] for a comprehensive study of this problem in the case \(m=m^{J,\Omega }\). In Sect. 5 we will consider other types of Neumann problems.

3.2 Infinite speed of propagation and ergodicity

In this section we study the infinite speed of propagation of the heat flow \((e^{t\Delta _{m}})_{t \ge 0}\), that is, we study the conditions under which

$$\begin{aligned} e^{t\Delta _{m}} u_0> 0 \quad \hbox {for all} \ t > 0 \ \hbox {whenever} \ 0 \le u_0 \in L^2(X, \nu ),\ u_0\not \equiv 0. \end{aligned}$$

We will see that this property is equivalent to a connectedness property of the space, to the ergodicity of the m-Laplacian \(\Delta _{m}\), and to the ergodicity of the measure \(\nu \).

Let \([X,d,m]\) be a metric random walk space with invariant measure \(\nu \). For a \(\nu \)-measurable set D, we set

$$\begin{aligned} N^m_D =\{x\in X \,:\, m_x^{*n}(D)=0\ \ \forall n\in {\mathbb {N}}\}. \end{aligned}$$

Definition 3.6

A metric random walk space \([X, d, m,\nu ]\) is said to be random-walk-connected or m-connected if for any \(D \subset X\) with \(0<\nu (D)\) we have that \(\nu (N_D^m)=0.\)

This is equivalent to requiring that, for every Borel set \(D\subset X\) with \(\nu (D)>0\) and \(\nu \)-a.e. \(x\in X\),

$$\begin{aligned} \sum _{n=1}^{\infty }m_x^{*n}(D)>0. \end{aligned}$$

For locally finite weighted connected graphs we have the following result.

Theorem 3.7

[46] Let \([V(G), d_G, (m^G_x),\nu _G]\) be the metric random walk space associated to a locally finite weighted connected graph \(G = (V(G), E(G))\). Then,  \([V(G), d_G, (m^G_x),\nu _G]\) is m-connected.

The next result establishes a relation between the m-connectedness of a metric random walk space and the infinite speed of propagation of the heat flow.

Theorem 3.8

[46] \([X,d,m,\nu ]\) is m-connected if,  and only if,  for any non-\(\nu \)-null \(0 \le u_0 \in L^2(X, \nu ),\) we have \(e^{t\Delta _{m}} u_0> 0\) \(\nu \)-a.e.,  for all \(t > 0.\)

We now relate the m-connectedness notion with other known concepts in the literature. Let us begin with the concept of ergodicity of the invariant measure (see, for example, [37]).

Definition 3.9

Let \([X, d, m,\nu ]\) be a metric random walk space with \(\nu \) a probability measure.

  (i) A Borel set \(B \subset X\) is said to be invariant with respect to the random walk m if \(m_x(B) = 1\) whenever x is in B.

  (ii) The invariant probability measure \(\nu \) is said to be ergodic if \(\nu (B) = 0\) or \(\nu (B) = 1\) for every set B which is invariant with respect to the random walk m.

Theorem 3.10

[46] Let \([X, d, m,\nu ]\) be a metric random walk space with \(\nu \) a probability measure. Then,  \([X, d, m,\nu ]\) is m-connected if,  and only if,  \(\nu \) is ergodic.

Following Bakry et al. [8], we give the following definition.

Definition 3.11

Let \([X, d, m,\nu ]\) be a metric random walk space. We say that \(\Delta _m\) is ergodic if

$$\begin{aligned} \Delta _m u = 0, \ u\in L^1(X,\nu ) \ \Rightarrow \ u\hbox { is a constant }\nu \hbox {-a.e. (this constant is }0 \hbox { if }\nu \hbox { is not finite)}. \end{aligned}$$

This concept is also equivalent to the m-connectedness of the metric random walk space:

Theorem 3.12

[46] Let \([X, d, m,\nu ]\) be a metric random walk space with \(\nu \) a probability measure. Then,  \([X, d, m,\nu ]\) is m-connected if,  and only if,  \(\Delta _m\) is ergodic.

Consequently, m-connectedness is not a new concept; nevertheless, our aim with its introduction is to regard it as a kind of intrinsic geometric property of the metric random walk space. At the beginning of Sect. 4.1 we give another characterization of m-connectedness which justifies the choice of this terminology.

3.3 Functional inequalities

Suppose that \(\nu \) is a probability measure. We denote the mean value of \(f \in L^1(X, \nu )\), or the expected value of f, by

$$\begin{aligned} \nu (f)= {\mathbb {E}}_\nu (f)=\int _X f(x) d\nu (x). \end{aligned}$$

For \(f \in L^2(X, \nu )\), we denote the variance of f by

$$\begin{aligned} \mathrm{Var}_\nu (f):= \int _X (f(x) - \nu (f))^2 d\nu (x) = \frac{1}{2} \int _{X \times X} (f(x) - f(y))^2 d\nu (y) d\nu (x). \end{aligned}$$

Definition 3.13

The spectral gap of \(-\Delta _m\) is defined as

$$\begin{aligned} \mathrm{gap}(-\Delta _m) = \inf \left\{ \frac{{\mathcal {H}}_m(f)}{\mathrm{Var}_\nu (f) } \ : \ f \in D({\mathcal {H}}_m), \ \mathrm{Var}_\nu (f) \not = 0 \right\} , \end{aligned}$$

or, equivalently,

$$\begin{aligned} \mathrm{gap}(-\Delta _m) = \inf \left\{ \frac{{\mathcal {H}}_m(f)}{\Vert f \Vert ^2_2 } \ : \ f \in D({\mathcal {H}}_m), \ \Vert f \Vert _2 \not = 0, \ \nu (f) = 0 \right\} . \end{aligned}$$

Definition 3.14

We say that \([X,d,m,\nu ]\) satisfies a Poincaré inequality if there exists \(\lambda >0\) such that

$$\begin{aligned} \lambda \mathrm{Var}_\nu (f) \le {\mathcal {H}}_m(f) \quad \hbox {for all} \ f \in L^2(X,\nu ), \end{aligned}$$
(3.4)

or, equivalently,

$$\begin{aligned} \lambda \Vert f\Vert _{L^2(X, \nu )}^2\le {\mathcal {H}}_m(f)\quad \hbox {for all} \ f \in L^2(X,\nu ) \hbox { with }\nu (f)=0. \end{aligned}$$

Note that when \(\mathrm{gap}(-\Delta _m) >0\), \([X,d,m,\nu ]\) satisfies a Poincaré inequality with \(\lambda =\mathrm{gap}(-\Delta _m)\):

$$\begin{aligned} \mathrm{gap}(-\Delta _m)\mathrm{Var}_\nu (f) \le {\mathcal {H}}_m(f) \quad \hbox {for all} \ f \in L^2(X,\nu ), \end{aligned}$$

where \(\mathrm{gap}(-\Delta _m)\) is the best constant in the Poincaré inequality.
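On a finite connected weighted graph, with \(\nu _G\) normalized to a probability measure, the spectral gap can be computed directly: it is the second-smallest eigenvalue of the symmetrized normalized Laplacian \(D^{-1/2}(D-W)D^{-1/2}\). A sketch (numpy assumed; the graph is illustrative), with a numerical check of the Poincaré inequality:

```python
import numpy as np

# Illustrative 4-vertex weighted path graph
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 0.]])
d = W.sum(axis=1)
nu = d / d.sum()                      # nu_G normalized to a probability measure

# gap(-Delta_m) = second-smallest eigenvalue of D^{-1/2} (D - W) D^{-1/2}
S = (np.diag(d) - W) / np.sqrt(np.outer(d, d))
gap = np.sort(np.linalg.eigvalsh(S))[1]

# sanity check of the Poincaré inequality: gap * Var_nu(f) <= H_m(f)
f = np.array([1.0, -2.0, 0.5, 3.0])
var = nu @ (f - nu @ f) ** 2
H = 0.5 * ((f[:, None] - f[None, :]) ** 2 * W).sum() / d.sum()   # H_m(f)
poincare_holds = gap * var <= H + 1e-12
```

The eigenvalue formulation is the Rayleigh-quotient form of Definition 3.13: minimizing \({\mathcal {H}}_m(f)/\mathrm{Var}_\nu (f)\) over \(\nu \)-mean-zero f is the generalized eigenproblem \((D-W)f=\lambda Df\).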

With such an inequality at hand, and with a proof similar to the one in the continuous setting (see, for instance, [8]), one can show that if \(\mathrm{gap}(-\Delta _m) >0\) then \(e^{t\Delta _{m}} u_0\) converges to \(\nu (u_0)\) with exponential rate \(\lambda =\mathrm{gap}(-\Delta _m)\). In fact, we have:

Theorem 3.15

The following statements are equivalent : 

  1. (i)

    There exists \(\lambda >0\) such that

    $$\begin{aligned} \lambda \mathrm{Var}_\nu (f) \le {\mathcal {H}}_m(f) \quad \hbox {for all} \ f \in L^2(X,\nu ). \end{aligned}$$
  2. (ii)

    For every \(f \in L^2(X,\nu )\)

    $$\begin{aligned} \Vert e^{t\Delta _m} f - \nu (f) \Vert _{L^2(X, \nu )} \le e^{- \lambda t} \Vert f - \nu (f) \Vert _{L^2(X, \nu )} \quad \hbox {for all} \ t \ge 0. \end{aligned}$$
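This equivalence can be checked numerically on a finite graph, where the heat semigroup is the matrix exponential \(e^{t(P-I)}\) of the generator \(\Delta _{m^G}=P-I\). A sketch (numpy/scipy and illustrative data assumed), verifying the decay estimate of (ii) with \(\lambda =\mathrm{gap}(-\Delta _m)\):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 4-vertex weighted path graph
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 0.]])
d = W.sum(axis=1)
P = W / d[:, None]                            # transition matrix of the random walk
nu = d / d.sum()                              # invariant probability measure

S = (np.diag(d) - W) / np.sqrt(np.outer(d, d))
lam = np.sort(np.linalg.eigvalsh(S))[1]       # gap(-Delta_m)

f = np.array([1.0, -2.0, 0.5, 3.0])
mean = nu @ f                                 # nu(f)
l2 = lambda g: np.sqrt(nu @ g ** 2)           # L^2(X, nu) norm

decay_holds = all(
    l2(expm(t * (P - np.eye(4))) @ f - mean)
    <= np.exp(-lam * t) * l2(f - mean) + 1e-10
    for t in (0.1, 1.0, 5.0)
)
```

Since the gap is the optimal Poincaré constant, the estimate holds with equality for the corresponding eigenfunction and with strict inequality for generic data.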

We finish this subsection by relating Poincaré inequalities with the Bakry–Émery curvature-dimension condition for the random walk. Observe that, since \({\mathcal {E}}_m\) admits a Carré du champ \(\Gamma \) (see [8]) defined by

$$\begin{aligned} \Gamma (f,g)(x)&= \frac{1}{2} \Big (\Delta _m(fg)(x) - f(x)\Delta _mg(x) - g(x) \Delta _mf (x) \Big ) \\&\quad \hbox {for all } x\in X \hbox { and } f,g \in L^2(X, \nu ), \end{aligned}$$

we can study the Bakry–Émery curvature-dimension condition in this context. Furthermore, we will address its relation with the spectral gap.

According to Bakry and Émery [7], we define the Ricci curvature operator \(\Gamma _2\) by iterating \(\Gamma \):

$$\begin{aligned} \Gamma _2(f,g) := \frac{1}{2} \Big ( \Delta _m \Gamma (f,g) - \Gamma (f,\Delta _m g)- \Gamma ( \Delta _m f,g) \Big ). \end{aligned}$$

This is well defined for \(f,g\in L^2(X, \nu )\). Moreover, we write, for \(f\in L^2(X, \nu )\),

$$\begin{aligned} \Gamma (f):=\Gamma (f,f)=\frac{1}{2} \Delta _m(f^2)-f\Delta _mf \end{aligned}$$

and

$$\begin{aligned} \Gamma _2(f):= \Gamma _2(f,f)= \frac{1}{2} \Delta _m \Gamma (f) - \Gamma (f,\Delta _m f). \end{aligned}$$

It is easy to see that

$$\begin{aligned} \Gamma (f,g)(x)=\frac{1}{2}\int _X\nabla f(x,y)\nabla g(x,y)dm_x(y), \end{aligned}$$

and

$$\begin{aligned} \Gamma (f)(x) = \frac{1}{2} \int _X \vert \nabla f(x,y) \vert ^2 dm_x(y). \end{aligned}$$

Consequently,

$$\begin{aligned} \int _X \Gamma (f,g)(x) d\nu (x) = {\mathcal {E}}_m (f,g) \quad \hbox {and}\ \int _X \Gamma (f)(x) d\nu (x) = {\mathcal {H}}_m (f). \end{aligned}$$
(3.5)

Furthermore, by (3.1) and (3.5), we get

$$\begin{aligned} \int _X \Gamma _2(f)\, d\nu&= \frac{1}{2} \int _X \left( \Delta _m \Gamma (f) - 2 \Gamma (f,\Delta _m f) \right) \, d \nu \\&= - \int _X \Gamma (f,\Delta _m f) \, d\nu = - {\mathcal {E}}_m (f,\Delta _m f), \end{aligned}$$

and, therefore,

$$\begin{aligned} \int _X \Gamma _2(f)\, d\nu = \int _X (\Delta _m f)^2 \, d\nu . \end{aligned}$$
(3.6)
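For the reversible random walk on a finite graph, identities (3.5) and (3.6) are easy to verify numerically, with \(\Delta _m\) realized as the matrix \(P-I\). A sketch (numpy assumed, illustrative data):

```python
import numpy as np

# Illustrative 4-vertex weighted path graph; nu_G is reversible for m^G
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 0.]])
d = W.sum(axis=1)
P = W / d[:, None]
nu = d / d.sum()

Delta = lambda f: P @ f - f                              # Delta_m f
Gamma = lambda f, g: 0.5 * (Delta(f * g) - f * Delta(g) - g * Delta(f))

f = np.array([1.0, -2.0, 0.5, 3.0])
G = Gamma(f, f)                                          # Gamma(f)
G2 = 0.5 * Delta(G) - Gamma(f, Delta(f))                 # Gamma_2(f)

H = nu @ G                                               # = H_m(f) by (3.5)
check_36 = abs(nu @ G2 - nu @ Delta(f) ** 2) < 1e-12     # identity (3.6)
```

One can also confirm pointwise that \(\Gamma (f)(x)\) equals \(\frac{1}{2}\int _X \vert \nabla f(x,y)\vert ^2 dm_x(y)\), as stated above.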

Definition 3.16

The operator \(\Delta _m\) satisfies the Bakry–Émery curvature-dimension condition \(BE(K,n)\) for \(n \in (1, +\infty )\) and \(K \in {\mathbb {R}}\) if

$$\begin{aligned} \Gamma _2(f) \ge \frac{1}{n} (\Delta _m f)^2 + K \Gamma (f) \quad \forall \, f \in L^2(X, \nu ). \end{aligned}$$

The constant n is called the dimension of the operator \(\Delta _m\), and K a lower bound of its Ricci curvature. If there exists \(K \in {\mathbb {R}}\) such that

$$\begin{aligned} \Gamma _2(f) \ge K \Gamma (f) \quad \forall \, f \in L^2(X, \nu ), \end{aligned}$$

then it is said that the operator \(\Delta _m\) satisfies the Bakry–Émery curvature-dimension condition \(BE(K, \infty )\).

The use of the Bakry–Émery curvature-dimension condition as a possible definition of Ricci curvature in Markov chains was first considered in 1998 by Schmuckenschlager [54]. This concept of Ricci curvature in the discrete setting has been frequently used following the work by Lin and Yau [41] (see [40] and the references therein).

Integrating the Bakry–Émery curvature-dimension condition \(BE(K,n)\) we have

$$\begin{aligned} \int _X \Gamma _2(f)\, d\nu \ge \frac{1}{n} \int _X (\Delta _m f)^2 \, d\nu + K \int _X \Gamma (f) \, d\nu . \end{aligned}$$

Now, by (3.5) and (3.6), this inequality can be rewritten as

$$\begin{aligned} \int _X (\Delta _m f)^2 \, d\nu \ge \frac{1}{n} \int _X (\Delta _m f)^2 \, d\nu + K {\mathcal {H}}_m (f), \end{aligned}$$

or, equivalently, as

$$\begin{aligned} K\frac{n}{n-1} {\mathcal {H}}_m (f) \le \int _X (\Delta _m f)^2 \, d\nu . \end{aligned}$$
(3.7)

Similarly, integrating the Bakry–Émery curvature-dimension condition \(BE(K, \infty )\) we have

$$\begin{aligned} K {\mathcal {H}}_m (f) \le \int _X (\Delta _m f)^2 \, d\nu . \end{aligned}$$
(3.8)

We call the inequalities (3.7) and (3.8) the integrated Bakry–Émery curvature-dimension conditions.

Theorem 3.17

[46] Let \([X, d, m,\nu ]\) be a metric random walk space. Assume that \(\Delta _m\) is ergodic. Then, 

$$\begin{aligned} \mathrm{gap}(-\Delta _m) = \sup \left\{ \lambda \ge 0 \, : \, \lambda {\mathcal {H}}_m(f) \le \int _X (-\Delta _m f)^2 d\nu \ \ \forall f \in L^2(X,\nu ) \right\} . \end{aligned}$$

Consequently, on account of Theorem 3.17, we have the following result.

Theorem 3.18

[46] Let \([X, d, m,\nu ]\) be a metric random walk space with \(\nu \) a probability measure. Assume that \(\Delta _m\) is ergodic. Then, 

  1. (1)

    \(\Delta _m\) satisfies an integrated Bakry–Émery curvature-dimension condition \(BE(K,n)\) with \(K >0\) if,  and only if,  a Poincaré inequality with constant \(K\frac{n}{n-1}\) is satisfied.

  2. (2)

    \(\Delta _m\) satisfies an integrated Bakry–Émery curvature-dimension condition \(BE(K, \infty )\) with \(K >0\) if,  and only if,  a Poincaré inequality with constant K is satisfied.

Therefore,  if \(\Delta _m\) satisfies the Bakry–Émery curvature-dimension condition \(BE(K,n)\) with \(K>0,\) we have

$$\begin{aligned} \mathrm{gap}(- \Delta _m) \ge K\frac{n}{n-1}. \end{aligned}$$

In the case that \(\Delta _m\) satisfies the Bakry–Émery curvature-dimension condition \(BE(K,\infty )\) with \(K >0,\) we have

$$\begin{aligned} \mathrm{gap}(- \Delta _m) \ge K. \end{aligned}$$

In [46] there is an example that shows that, in general, the integrated Bakry–Émery curvature-dimension condition \(BE(K,n)\) with \(K >0\) does not imply the Bakry–Émery curvature-dimension condition \(BE(K,n)\) with \(K >0\).

4 The total variation flow on metric random walk spaces

4.1 Perimeter, curvature and total variation

Let \([X,d,m,\nu ]\) be a metric random walk space. We define the m-interaction between two \(\nu \)-measurable subsets A and B of X as

$$\begin{aligned} L_m(A,B):= \int _A \int _B dm_x(y) d\nu (x). \end{aligned}$$

Whenever \(L_m(A,B) < +\infty \), by the reversibility assumption on \(\nu \) with respect to m, we have that

$$\begin{aligned} L_m(A,B)=L_m(B,A). \end{aligned}$$

With this concept in mind we have that (see [46, Proposition 2.11]) a metric random walk space \([X, d, m,\nu ]\) is m-connected if, and only if, for any pair of Borel sets \( A, B\subset X\) satisfying \(A\cup B=X\) and \(L_m(A,B)= 0\), either \(\nu (A)=0\) or \(\nu (B)=0\).

Let us now introduce the concept of perimeter of a set in this general setting.

Definition 4.1

We define the m-perimeter of a \(\nu \)-measurable subset \(E \subset X\) as

$$\begin{aligned} P_m(E):=L_m(E,X\setminus E) = \int _E \int _{X\setminus E} dm_x(y) d\nu (x). \end{aligned}$$

It is easy to see that

$$\begin{aligned} P_m(E)=\frac{1}{2}\int _X \int _X \vert \chi _E(y) - \chi _E(x) \vert \, dm_x(y) d\nu (x). \end{aligned}$$

Moreover, if \(\nu (E)<+\infty \), we have

$$\begin{aligned} P_m(E)=\nu (E) -\int _E\int _E dm_x(y) d\nu (x). \end{aligned}$$

We may motivate this notion of perimeter as follows. For a population with starting distribution \(\nu \) that moves according to the law provided by the random walk m, \(L_m(A,B)\) measures how many individuals move from A to B in one jump and, thanks to the reversibility of \(\nu \) with respect to m, this equals the number of individuals moving from B to A. In this regard, the m-perimeter measures the total flux of individuals crossing the “boundary” (in a very weak sense) of a set.

Consider the metric random walk space \([V(G), d_G, m^G,\nu ^G ]\) associated to a finite weighted discrete graph G. Given \(A, B \subset V(G)\), \(\mathrm{Cut}(A,B)\) is defined as

$$\begin{aligned} \mathrm{Cut}(A,B):= \sum _{x \in A, y \in B} w_{xy} = L_{m^G}(A,B), \end{aligned}$$

and the perimeter of a set \(E \subset V(G)\) is given by

$$\begin{aligned} \vert \partial E \vert := \mathrm{Cut}(E,E^c) = \sum _{x \in E, y \in V(G) \setminus E} w_{xy}. \end{aligned}$$

Consequently, we have that

$$\begin{aligned} \vert \partial E \vert = P_{m^G}(E) \quad \hbox {for all} \ E \subset V(G). \end{aligned}$$

In the case of Example 2.2(1), this concept is the same as the one studied in [45] and whose origin goes back to [12, 15, 24].
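For a finite graph, the identity \(P_{m^G}(E)=\mathrm{Cut}(E,E^c)=\nu _G(E)-L_{m^G}(E,E)\) can be verified directly. A minimal sketch (numpy assumed; the weights and the set E are illustrative):

```python
import numpy as np

# Illustrative weighted path graph on 4 vertices
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 0.]])
d = W.sum(axis=1)                       # nu_G({x}) = d_x
E = np.array([True, True, False, False])

cut = W[np.ix_(E, ~E)].sum()            # Cut(E, E^c) = sum_{x in E, y not in E} w_xy

# L_m(E, E) = sum_{x in E} d_x * m_x^G(E) = sum_{x, y in E} w_xy
L_EE = W[np.ix_(E, E)].sum()
perimeter = d[E].sum() - L_EE           # P_m(E) = nu(E) - L_m(E, E)
```

Both computations give the same value, since \(\nu _G(E)=\sum _{x\in E} d_x\) splits into the edges staying inside E and those crossing to \(E^c\).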

In the same spirit, we define a nonlocal notion of the mean curvature of the boundary of a set.

Definition 4.2

Let \(E \subset X\) be a \(\nu \)-measurable set. For a point \(x \in X\) we define the m-mean curvature of \(\partial E\) at x as

$$\begin{aligned} H^m_{\partial E}(x):= m_x(X\setminus E) - m_x(E). \end{aligned}$$

Observe that

$$\begin{aligned} H^m_{\partial E}(x) = 1 - 2 \int _E dm_x(y). \end{aligned}$$

Finally, associated to the random walk \(m=(m_x)\) and the invariant measure \(\nu \), we define the following space of bounded nonlocal variation functions

$$\begin{aligned} BV_m(X,\nu ):= \left\{ u :X \rightarrow {\mathbb {R}}\ \nu \hbox { -measurable} \, : \, \int _{X} \int _{X} \vert u(y) - u(x) \vert dm_x(y) d\nu (x) < \infty \right\} , \end{aligned}$$

which satisfies \(L^1(X,\nu )\subset BV_m(X,\nu )\). The m-total variation of a function \(u\in BV_m(X,\nu )\) is then defined by

$$\begin{aligned} TV_m(u):= \frac{1}{2} \int _{X} \int _{X} \vert u(y) - u(x) \vert dm_x(y) d\nu (x) \end{aligned}$$

and, as in the local case, we have that

$$\begin{aligned} TV_m(\chi _E) = P_m(E) \quad \hbox {for every } \nu \hbox {-measurable set } E \subset X. \end{aligned}$$

Example 4.3

Let \([V(G), d_G, (m^G_x),\nu _G]\) be the metric random walk space associated to a finite weighted discrete graph G. Then,

$$\begin{aligned} TV_{m^G} (u)= & {} \frac{1}{2} \int _{V(G)} \int _{V(G)} \vert u(y) - u(x) \vert dm^G_x(y) d\nu _G(x)\\= & {} \frac{1}{2} \int _{V(G)} \frac{1}{d_x} \left( \sum _{y \in V(G)} \vert u(y) - u(x) \vert w_{xy}\right) d\nu _G(x) \\= & {} \frac{1}{2} \sum _{x \in V(G)} d_x \left( \frac{1}{d_x} \sum _{y \in V(G)} \vert u(y) - u(x) \vert w_{xy}\right) \\= & {} \frac{1}{2} \sum _{x \in V(G)} \sum _{y \in V(G)} \vert u(y) - u(x) \vert w_{xy}, \end{aligned}$$

which coincides with the anisotropic total variation defined in [32].

As in the local case, we have the following coarea formula relating the total variation of a function with the perimeter of its superlevel sets.

Theorem 4.4

(Coarea formula) [48] For any \(u \in L^1(X,\nu ),\) let \(E_t(u):= \{ x \in X \ : \ u(x) > t \}.\) Then, 

$$\begin{aligned} TV_m(u) = \int _{-\infty }^{+\infty } P_m(E_t(u))\, dt. \end{aligned}$$
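On a finite graph the coarea formula becomes a finite sum, since \(P_m(E_t(u))\) is constant for t between consecutive values of u. A sketch verifying it (numpy assumed; the graph and the function u are illustrative):

```python
import numpy as np

# Illustrative weighted path graph on 4 vertices
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 0.]])
u = np.array([0.0, 1.0, 3.0, 2.0])

# TV_{m^G}(u) = (1/2) sum_{x,y} w_xy |u(y) - u(x)|
TV = 0.5 * (np.abs(u[None, :] - u[:, None]) * W).sum()

# integral of P_m(E_t(u)) dt, with E_t(u) = {u > t}
vals = np.unique(u)
coarea = 0.0
for a, b in zip(vals[:-1], vals[1:]):
    E = u > a                                   # superlevel set for t in (a, b)
    coarea += W[np.ix_(E, ~E)].sum() * (b - a)  # P_m(E_t) * |interval|

formula_holds = abs(TV - coarea) < 1e-12
```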

4.2 The 1-Laplacian and the TVF

Let \([X,d,m,\nu ]\) be an m-connected metric random walk space.

As motivation, consider the formal nonlocal evolution equation

$$\begin{aligned} u_t(x,t) = \int _{X} \frac{u(y,t) - u(x,t)}{\vert u(y,t) - u(x,t) \vert } dm_x(y), \quad x \in X,\ t \ge 0. \end{aligned}$$
(4.1)

In order to study the Cauchy problem associated to this equation, we will see in Theorem 4.8 that we can rewrite it as the gradient flow in \(L^2(X,\nu )\) of the functional \({\mathcal {F}}_m : L^2(X, \nu ) \rightarrow ]-\infty , + \infty ]\) defined by

$$\begin{aligned} {\mathcal {F}}_m(u):= \left\{ \begin{array}{ll} TV_m(u) &{}\quad \hbox {if} \ u\in L^2(X,\nu )\cap BV_m(X,\nu ), \\ + \infty &{}\quad \hbox {if}\ u\in L^2(X,\nu )\setminus BV_m(X,\nu ), \end{array} \right. \end{aligned}$$

which is convex and lower semi-continuous. Following the method used in [3], the subdifferential of the functional \({\mathcal {F}}_m\) is characterized as follows.

Theorem 4.5

[48] Let \(u \in L^2(X,\nu )\) and \(v \in L^2(X,\nu )\). The following assertions are equivalent : 

  1. (i)

    \(v \in \partial {\mathcal {F}}_m (u);\)

  2. (ii)

    there exists \({\mathbf{z}}\in X_m^2(X),\) \(\Vert {\mathbf{z}}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1\) such that

    $$\begin{aligned} v = - \mathrm{div}_m {\mathbf{z}}\end{aligned}$$
    (4.2)

    and

    $$\begin{aligned} \int _{X} u(x) v(x) d\nu (x) = {\mathcal {F}}_m (u); \end{aligned}$$
  3. (iii)

    there exists \({\mathbf{z}}\in X_m^2(X),\) \(\Vert {\mathbf{z}}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1\) such that (4.2) holds and

    $$\begin{aligned} {\mathcal {F}}_m (u) = \frac{1}{2}\int _{X \times X} \nabla u(x,y) {\mathbf{z}}(x,y) d(\nu \otimes m_x)(x,y); \end{aligned}$$
  4. (iv)

    there exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) such that

    $$\begin{aligned} -\int _{X}\mathbf{g}(x,y)\,dm_x(y)= v(x) \quad \hbox {for } \nu \hbox {-a.e. } x\in X, \end{aligned}$$
    (4.3)

    and

    $$\begin{aligned} -\int _{X} \int _{X}{} \mathbf{g}(x,y)dm_x(y)\,u(x)d\nu (x)={\mathcal {F}}_m(u). \end{aligned}$$
  5. (v)

    there exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) verifying (4.3) and

    $$\begin{aligned} \mathbf{g}(x,y) \in \mathrm{sign}(u(y) - u(x)) \quad \hbox {for }(\nu \otimes m_x)-a.e. \ (x,y) \in X \times X. \end{aligned}$$

The m-1-Laplacian is defined via this subdifferential in the following manner.

Definition 4.6

We define in \(L^2(X,\nu )\) the multivalued operator \(\Delta ^m_1\) by

\((u, v ) \in \Delta ^m_1\) if, and only if, \(-v \in \partial {\mathcal {F}}_m(u)\).

As usual, we will write \(v\in \Delta ^m_1 u\) for \((u,v)\in \Delta ^m_1\).

Chang in [16] and Hein and Bühler in [36] defined a similar operator in the particular case of finite graphs.

Example 4.7

Let \([V(G), d_G, (m^G_x)]\) be the metric random walk space given in Example 2.2(3) with invariant measure \(\nu _G\). By Theorem 4.5, we have that \((u, v ) \in \Delta ^{m^G}_1\) if, and only if, there exists \(\mathbf{g}\in L^\infty (V(G)\times V(G), \nu _G \otimes m^G_x)\) antisymmetric with

$$\begin{aligned} \Vert \mathbf{g} \Vert _{L^\infty (V(G)\times V(G), \nu _G \otimes m^G_x)}\le 1 \end{aligned}$$

such that

$$\begin{aligned} \frac{1}{d_x}\sum _{y \in V(G)}\mathbf{g}(x,y) w_{xy}= v(x) \quad \forall \, x\in V(G), \end{aligned}$$

and

$$\begin{aligned} \mathbf{g}(x,y) \in \mathrm{sign}(u(y) - u(x)) \quad \hbox {for }(\nu _G \otimes m^G_x)-a.e. \ (x,y) \in V(G) \times V(G). \end{aligned}$$

As a consequence of Theorem 4.5, we can give the following existence and uniqueness result for the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t - \Delta ^m_1 u \ni 0 &{}\quad \hbox {in} \ (0,T) \times X\\ u(0,x) = u_0 (x) &{}\quad \hbox {for } x \in X, \end{array}\right. \end{aligned}$$
(4.4)

which is a rewrite of the formal expression (4.1).

Theorem 4.8

For every \(u_0 \in L^2( X,\nu )\) and any \(T>0,\) there exists a unique solution of the Cauchy problem (4.4) in (0, T) in the following sense :  \(u \in W^{1,1}(0,T; L^2(X,\nu )),\) \(u(0, \cdot ) = u_0\) in \(L^2(X,\nu ),\) and,  for almost all \(t \in (0,T),\)

$$\begin{aligned} u_t(t,\cdot ) - \Delta ^m_1 u(t) \ni 0. \end{aligned}$$

Moreover,  we have the following contraction and maximum principle in any \(L^q(X,\nu )\)-space,  \(1\le q\le \infty {:}\)

$$\begin{aligned} \Vert (u(t)-v(t))^+\Vert _{L^q(X,\nu )}\le \Vert (u_0-v_0)^+\Vert _{L^q(X,\nu )}\quad \forall \, 0<t<T, \end{aligned}$$

for any pair of solutions,  \(u,\, v,\) of problem (4.4) with initial data \(u_0,\, v_0\) respectively.

Definition 4.9

Given \(u_0 \in L^2(X, \nu )\), we denote by \(e^{t \Delta ^m_1}u_0\) the unique solution of problem (4.4) and we call the semigroup \(\{e^{t\Delta ^m_1} \}_{t \ge 0}\) in \(L^2(X, \nu )\) the Total Variation Flow in the metric random walk space \([X,d,m,\nu ]\).

4.3 Asymptotic behaviour of the TVF and Poincaré type inequalities

Let \([X,d,m,\nu ]\) be an m-connected metric random walk space.

Definition 4.10

We say that \([X,d,m,\nu ]\) satisfies a (pq)-Poincaré inequality (\(p, q\in [1,+ \infty [\)) if there exists a constant \(c>0\) such that, for any \(u \in L^q(X,\nu )\),

$$\begin{aligned} \left\| u \right\| _{L^p(X,\nu )} \le c\left( \left( \int _{X}\int _{X} |u(y)-u(x)|^q dm_x(y) d\nu (x) \right) ^{\frac{1}{q}}+\left| \int _X u\,d\nu \right| \right) , \end{aligned}$$

or, equivalently, there exists a \(\lambda > 0\) such that

$$\begin{aligned} \lambda \left\| u - \nu (u) \right\| _{L^p(X, \nu )} \le \Vert \nabla u \Vert _{L^q(X \times X, \nu \otimes m_x) } \quad \hbox {for all} \ u \in L^q(X,\nu ). \end{aligned}$$

When \([X,d,m,\nu ]\) satisfies a (p, 1)-Poincaré inequality, we will say that \([X,d,m,\nu ]\) satisfies a p-Poincaré inequality and write

$$\begin{aligned} \lambda ^p_{[X,d,m,\nu ]} : = \inf \left\{ \frac{TV_m(u)}{\Vert u \Vert _{L^p(X, \nu )}} \ : \ \Vert u \Vert _{L^p(X, \nu )}\not = 0, \ \int _X u(x) d \nu (x) = 0 \right\} . \end{aligned}$$
(4.5)

If a Poincaré inequality is satisfied then we obtain the following result on the asymptotic behaviour of the total variation flow.

Theorem 4.11

[48] If \([X,d,m,\nu ]\) satisfies a 1-Poincaré inequality,  then,  for any \(u_0 \in L^2(X, \nu ),\)

$$\begin{aligned} \left\| e^{t\Delta ^m_1}u_0 - \nu (u_0) \right\| _{L^1(X, \nu )} \le \frac{1}{2 \lambda ^{1}_{[X,d,m,\nu ]} } \frac{\Vert u_0 \Vert ^2_{L^2(X, \nu )}}{t} \quad \hbox {for all} \ t >0. \end{aligned}$$

The following result provides sufficient conditions for a Poincaré inequality to be satisfied by a metric random walk space. Two examples of metric random walk spaces in which a 1-Poincaré inequality is not satisfied are given in [48].

Theorem 4.12

[48] Suppose that \(\nu \) is a probability measure and

$$\begin{aligned} m_x\ll \nu \quad \hbox {for all }x\in X. \end{aligned}$$

Let (H1) and (H2) denote the following hypotheses.

  1. (H1)

    Given a \(\nu \)-null set B,  there exist \(x_1,x_2,\ldots , x_N\in X\setminus B\) and \(\alpha >0\) such that \(\nu \le \alpha (m_{x_1}+\cdots +m_{x_N})\).

  2. (H2)

    Let \( 1\le p<q\). Given a \(\nu \)-null set B,  there exist \(x_1,x_2,\ldots , x_N\in X\setminus B\) and \(\nu \)-measurable sets \(\Omega _1,\Omega _2,\ldots ,\Omega _N\subset X,\) such that \( X= \bigcup _{i=1}^N\Omega _i\) and \(\frac{dm_{x_i}}{d\nu }\in L^{\frac{p}{q-p}}(\Omega _i),\) \(i=1,2,\ldots ,N\).

Then,  if (H1) holds,  we have that \([X,d,m,\nu ]\) satisfies a (pp)-Poincaré inequality for every \(p\ge 1,\) and,  if (H2) holds,  then \([X,d,m,\nu ]\) satisfies a (pq)-Poincaré inequality.

In [55] the condition “\(m_x\ll \nu \) for all \(x\in X\)” has been weakened to \( \nu \left( \left\{ x\in X \, : \, m_x\perp \nu \right\} \right) =0. \)

Example 4.13

Let \([V(G),d_G,m^G, \nu _G]\) be the metric random walk space associated to a finite weighted connected graph G. Then,

$$\begin{aligned} \lambda ^{2}_{[V(G),d_G,m^G, \nu _G]} =\inf \left\{ \frac{TV_{m^G}(u)}{\Vert u \Vert _{L^2(V(G),\nu _G)}} \ : \ \Vert u \Vert _{L^2(V(G),\nu _G)} \not = 0, \ \int _{V(G)} u(x) d \nu _G(x) = 0 \right\} >0. \end{aligned}$$

Therefore, \([V(G),d_G,m^G, \nu _G]\) satisfies a 2-Poincaré inequality.

Corollary 4.14

Under the hypothesis of Theorem 4.12, for any \(u_0 \in L^2(X, \nu ),\)

$$\begin{aligned} \left\| e^{t\Delta ^m_1}u_0 - \nu (u_0) \right\| _{L^1(X, \nu )} \le \frac{1}{2\lambda ^1_{[X,d,m,\nu ]} } \frac{\Vert u_0 \Vert ^2_{L^2(X, \nu )}}{t} \quad \hbox {for all} \ t >0. \end{aligned}$$

Let us see that, when \([X,d,m,\nu ]\) satisfies a 2-Poincaré inequality, the total variation flow reaches the steady state in finite time.

Theorem 4.15

[48] Let \([X,d,m,\nu ]\) be a metric random walk space. If \([X,d,m,\nu ]\) satisfies a 2-Poincaré inequality then,  for any \(u_0 \in L^2(X, \nu ),\)

$$\begin{aligned} \Vert e^{t\Delta ^m_1}u_0-\nu (u_0)\Vert _{L^2(X,\nu )}\le \left( \Vert u_0-\nu (u_0)\Vert _{L^2(X,\nu )}-\lambda ^{2}_{[X,d,m,\nu ]}t\right) ^+\quad \hbox {for all }t \ge 0, \end{aligned}$$

where \(\lambda ^{2}_{[X,d,m,\nu ]}\) is given in (4.5). Consequently, 

$$\begin{aligned} e^{t\Delta ^m_1}u_0=\nu (u_0)\quad \forall \, t\ge {\hat{t}}:=\frac{\left\| u_0-\nu (u_0)\right\| _{L^2(X,\nu )}}{\lambda ^{2}_{[X,d,m,\nu ]}}. \end{aligned}$$

Therefore, if we define the extinction time as

$$\begin{aligned} T^*(u_0):= \inf \left\{ t >0 \ : \ e^{t\Delta ^m_1}u_0 = \nu (u_0) \right\} , \quad u_0\in L^2(X,\nu ), \end{aligned}$$

then, under the conditions of Theorem 4.15, we have that, for \(u_0\in L^2(X,\nu )\),

$$\begin{aligned} T^*(u_0) \le \frac{\left\| u_0-\nu (u_0)\right\| _{L^2(X,\nu )}}{\lambda ^{2}_{[X,d,m,\nu ]}} . \end{aligned}$$

To obtain a lower bound on the extinction time, we introduce the following norm which, in the continuous setting, was introduced in [50]. Given a function \(f \in L^2(X, \nu )\), we define

$$\begin{aligned} \Vert f \Vert _{m,*}:= \sup \left\{ \int _X f(x) u(x) d\nu (x) : u \in L^2(X, \nu )\cap BV_m(X,\nu ), \ TV_m(u) \le 1\right\} . \end{aligned}$$

Theorem 4.16

[48] Let \(u_0 \in L^2(X, \nu )\). If \(T^*(u_0) < \infty \) then

$$\begin{aligned} T^*(u_0) \ge \Vert u_0 - \nu (u_0)\Vert _{m,*}. \end{aligned}$$

4.4 m-Cheeger and m-calibrable sets

Let \([X,d,m,\nu ]\) be an m-connected metric random walk space.

Definition 4.17

Given a set \(\Omega \subset X\) with \(0< \nu (\Omega ) < \nu (X)\), we define the m-Cheeger constant of \(\Omega \) as

$$\begin{aligned} h_1^m(\Omega ) := \inf \left\{ \frac{P_m(E)}{\nu (E)} \, : \, E \subset \Omega , \ E \ \nu \hbox {-measurable with } \, \nu ( E)>0 \right\} . \end{aligned}$$
(4.6)

A \(\nu \)-measurable set \(E \subset \Omega \) achieving the infimum in (4.6) is called an m-Cheeger set of \(\Omega \). Furthermore, we say that \(\Omega \) is m-calibrable if it is an m-Cheeger set of itself, that is, if

$$\begin{aligned} h_1^m(\Omega ) = \frac{P_m(\Omega )}{\nu (\Omega )}. \end{aligned}$$

For ease of notation, we will denote

$$\begin{aligned} \lambda ^m_\Omega := \frac{P_m(\Omega )}{\nu (\Omega )}, \end{aligned}$$

for any \(\nu \)-measurable set \(\Omega \subset X\) with \(0<\nu (\Omega )<\nu (X)\).
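On a small graph the m-Cheeger constant of a set \(\Omega \) can be found by brute force over the subsets of \(\Omega \). A sketch (numpy assumed; the graph and \(\Omega \) are illustrative):

```python
import numpy as np
from itertools import combinations

# Illustrative weighted path graph on 4 vertices
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 0.]])
d = W.sum(axis=1)                           # nu_G({x}) = d_x

def ratio(E):
    """P_m(E) / nu(E) for a tuple of vertices E."""
    mask = np.zeros(len(d), bool)
    mask[list(E)] = True
    return W[np.ix_(mask, ~mask)].sum() / d[mask].sum()

Omega = (0, 1, 2)
h1 = min(ratio(E)
         for r in range(1, len(Omega) + 1)
         for E in combinations(Omega, r))   # m-Cheeger constant of Omega

lam_Omega = ratio(Omega)                    # lambda^m_Omega
calibrable = abs(h1 - lam_Omega) < 1e-12    # Omega attains the infimum itself
```

In this example the infimum is attained by \(\Omega \) itself, so this \(\Omega \) is m-calibrable.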

It is well known (see [29]) that the classical Cheeger constant

$$\begin{aligned} h_1(\Omega ):= \inf \left\{ \frac{Per(E)}{\vert E \vert } \, : \, E\subset \Omega , \ \vert E \vert >0 \right\} , \end{aligned}$$

for a bounded smooth domain \(\Omega \), is an optimal Poincaré constant, namely, it coincides with the first eigenvalue of the 1-Laplacian:

$$\begin{aligned} h_1(\Omega )={\Lambda }_1(\Omega ):= \inf \left\{ \frac{\int _\Omega \vert Du \vert +\int _{\partial \Omega } \vert u \vert d {\mathcal {H}}^{N-1}}{ \Vert u \Vert _{L^1(\Omega )}} \, : \, u \in BV(\Omega ), \ \Vert u \Vert _{L^\infty (\Omega )} = 1 \right\} . \end{aligned}$$

In order to get a nonlocal version of this result, we introduce the following constant. For \(\Omega \subset X\) with \(0<\nu (\Omega )< \nu (X)\), we define

$$\begin{aligned} \Lambda _1^m(\Omega )= & {} \inf \left\{ TV_m(u) \ : \ u \in L^1(X,\nu ), \ u= 0 \ \hbox {in} \ X \setminus \Omega , \ u \ge 0, \ \int _X u(x) d\nu (x) = 1 \right\} \\= & {} \inf \left\{ \frac{ TV_m (u)}{ \int _X u(x) d\nu (x)} \ : \ u \in L^1(X,\nu ), \ u= 0 \ \hbox {in} \ X \setminus \Omega ,\ u \ge 0, \ u\not \equiv 0 \right\} . \end{aligned}$$

Theorem 4.18

[48] Let \(\Omega \subset X\) with \(0< \nu (\Omega ) < \nu (X)\). Then, 

$$\begin{aligned} h_1^m(\Omega ) = \Lambda _1^m(\Omega ). \end{aligned}$$

Let us recall that, in the local case, a set \(\Omega \subset {\mathbb {R}}^N\) is called calibrable if

$$\begin{aligned} \frac{\text{ Per }(\Omega )}{\vert \Omega \vert } = \inf \left\{ \frac{\text{ Per }(E)}{\vert E\vert } \ : \ E \subset \Omega , \ E \ \hbox { with finite perimeter,} \ \vert E \vert > 0 \right\} . \end{aligned}$$

The following characterization of convex calibrable sets is proved in [1].

Theorem 4.19

[1] Given a bounded convex set \(\Omega \subset {\mathbb {R}}^N\) of class \(C^{1,1},\) the following assertions are equivalent : 

  1. (a)

    \(\Omega \) is calibrable.

  2. (b)

    \(\chi _\Omega \) satisfies \(-\Delta _1 \chi _\Omega = \frac{\text{ Per }(\Omega )}{\vert \Omega \vert }\, \chi _\Omega ,\) where \(\Delta _1 u:= \mathrm{div} \left( \frac{Du}{\vert Du \vert }\right) \).

  3. (c)

    \( (N-1) \underset{x \in \partial \Omega }{\mathrm{ess\, sup}} H_{\partial \Omega } (x) \le \frac{\text{ Per }(\Omega )}{\vert \Omega \vert }.\)

The next result is the nonlocal version of the fact that (a) is equivalent to (b) in Theorem 4.19.

Theorem 4.20

[48] Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\). Then,  the following assertions are equivalent : 

  1. (i)

    \(\Omega \) is m-calibrable, 

  2. (ii)

    there exists a \(\nu \)-measurable function \(\tau \) equal to 1 in \(\Omega \) such that

    $$\begin{aligned} \lambda ^m_\Omega \, \tau \in \partial {\mathcal {F}}_m (\chi _{\Omega }); \end{aligned}$$

  3. (iii)

    \(\lambda ^m_\Omega \, \tau ^* \in \partial {\mathcal {F}}_m (\chi _{\Omega })\) for

    $$\begin{aligned} \tau ^*(x)=\left\{ \begin{array}{ll} 1 &{}\quad \hbox {if } x\in \Omega ,\\ - \frac{1}{\lambda _\Omega ^m} m_x(\Omega )&{}\quad \hbox {if } x\in X\setminus \Omega . \end{array} \right. \end{aligned}$$

Remark 4.21

Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\). We have shown,

$$\begin{aligned} \Omega \ \hbox {is } m\hbox {-calibrable} \ \iff \ -\lambda ^m_\Omega \, \tau ^* \in \Delta ^m_1 \chi _{\Omega }. \end{aligned}$$

If \(\nu (X)<\infty \), as a consequence of the above relation and the m-connectedness of the metric random walk space, we have that

$$\begin{aligned} -\lambda ^m_\Omega \, \chi _{\Omega } \in \Delta ^m_1 \chi _{\Omega } \end{aligned}$$

does not hold true for any \(\nu \)-measurable set \(\Omega \) with \(0<\nu (\Omega )<\nu (X)\).

We also have:

Proposition 4.22

[48] Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X) \). Then, 

$$\begin{aligned} \Omega \ m\hbox {-calibrable} \ \Rightarrow \ \frac{1}{\nu (\Omega )}\int _\Omega m_x(\Omega )d\nu (x) \le 2\, \nu \hbox {-}\underset{x\in \Omega }{\mathrm{ess\,inf}}\ m_x(\Omega ). \end{aligned}$$

The above result relates the m-calibrability with the m-mean curvature, since it can be rewritten as

$$\begin{aligned} \Omega \ m\hbox {-calibrable} \ \Rightarrow \ \nu \hbox {-}\underset{x\in \Omega }{\mathrm{ess\,sup}} \ H^{m}_{\partial \Omega }(x) \le \lambda ^m_\Omega . \end{aligned}$$

Therefore, this is the nonlocal version of one of the implications in the equivalence between (a) and (c) in Theorem 4.19. However, the converse of Proposition 4.22 is not true in general: an example is given in [44] (see also [45]) for \([{\mathbb {R}}^3, d, m^J]\), with d the Euclidean distance and \(m^J\) the random walk of Example 2.2(1). An example of a graph for which the converse of Proposition 4.22 is not true is also given there.

4.5 The eigenvalue problem for the 1-Laplacian

In this section we introduce the eigenvalue problem associated with the 1-Laplacian \(\Delta ^m_1\) and its relation with the Cheeger minimization problem. For the particular case of finite weighted discrete graphs where the weights are either 0 or 1 this problem was first studied by Hein and Bühler ([36]); a more complete study was subsequently performed by Chang in [16].

Let \([X,d,m,\nu ]\) be an m-connected metric random walk space.

Definition 4.23

A pair \((\lambda , u) \in {\mathbb {R}}\times L^2(X, \nu )\) is called an m-eigenpair of the 1-Laplacian \(\Delta ^m_1\) on X if \(\Vert u \Vert _{L^1(X,\nu )} = 1\) and there exists \(\xi \in \mathrm{sign}(u)\) (i.e., \(\xi (x) \in \mathrm{sign}(u(x))\) for every \(x\in X\)) such that

$$\begin{aligned} \lambda \, \xi \in \partial {\mathcal {F}}_m(u) = - \Delta ^m_1 u. \end{aligned}$$

The function u is called an m-eigenfunction and \(\lambda \) an m-eigenvalue associated to u.

We have the following relation between m-calibrable sets and m-eigenpairs of \(\Delta ^{m}_1\).

Theorem 4.24

[48] Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\). We have : 

  1. (i)

    If \(\left( \lambda _\Omega ^m, \frac{1}{\nu (\Omega )} \chi _{\Omega }\right) \) is an m-eigenpair of \(\Delta _1^m,\) then \(\Omega \) is m-calibrable.

  2. (ii)

    If \(\Omega \) is m-calibrable and

    $$\begin{aligned} m_x(\Omega ) \le \lambda _\Omega ^m \quad \hbox {for }\nu \hbox {-a.e. } \ x \in X \setminus \Omega , \end{aligned}$$

    then \(\left( \lambda _\Omega ^m, \frac{1}{\nu (\Omega )} \chi _{\Omega }\right) \) is an m-eigenpair of \(\Delta _1^m\).

In [48] we give an example showing that, in Theorem 4.24, the reverse implications of (i) and (ii) are false in general.
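The sufficient condition in Theorem 4.24(ii), and the admissibility of the function \(\tau ^*\) of Theorem 4.20, can be checked numerically. A sketch on the complete graph \(K_4\) with unit weights and \(\Omega \) a pair of vertices (illustrative data; numpy assumed):

```python
import numpy as np

# Complete graph K4 with unit weights; Omega = {0, 1} is illustrative
W = np.ones((4, 4)) - np.eye(4)
d = W.sum(axis=1)                              # nu_G({x}) = d_x
Omega = np.array([True, True, False, False])

perim = W[np.ix_(Omega, ~Omega)].sum()         # P_m(Omega)
lam = perim / d[Omega].sum()                   # lambda^m_Omega
m_Omega = W[:, Omega].sum(axis=1) / d          # m_x(Omega) for every vertex x

# hypothesis of Theorem 4.24(ii): m_x(Omega) <= lambda^m_Omega off Omega
condition = bool(np.all(m_Omega[~Omega] <= lam + 1e-12))

# the function tau* of Theorem 4.20(iii) is then an admissible sign function
tau_star = np.where(Omega, 1.0, -m_Omega / lam)
admissible = bool(np.all(np.abs(tau_star) <= 1 + 1e-12))
```

Here \(\lambda ^m_\Omega =2/3\), every singleton in \(\Omega \) has ratio 1, so this \(\Omega \) is m-calibrable, and the condition holds with equality, so Theorem 4.24(ii) applies.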

Theorem 4.25

[48] If \((\lambda , u)\) is an m-eigenpair with \(\lambda >0\) and \(\nu (E_0(u)) > 0,\) then is an m-eigenpair,  \(\lambda =\lambda _{E_0(u)}^m\) and \(E_0(u)\) is m-calibrable. Moreover,  \(\nu (E_0(u))\le \frac{1}{2}\).

4.6 The m-Cheeger constant

Let \([X,d,m,\nu ]\) be an m-connected metric random walk space with \(\nu \) a probability measure. Assuming \(\nu (X)=1\) entails no loss of generality since, for \(\nu (X)<+\infty \), we may work with \(\frac{1}{\nu (X)}\nu \) instead. Observe that the ratio \(\lambda _D^m=\frac{P_m(D)}{\nu (D)}\) remains unchanged if we normalize the measure, and the same is true for the m-eigenvalues of the 1-Laplacian.

For a locally finite weighted discrete graph \(G=(V(G), E(G))\) the Cheeger constant is defined as

$$\begin{aligned} h_G:= \inf _{ D \subset V(G)} \ \frac{\vert \partial D \vert }{\min \{ \nu _G(D), \nu _G(V(G) \setminus D)\}}. \end{aligned}$$

In [20] (see also [9]), the following relation between the Cheeger constant and the first positive eigenvalue \(\lambda _1(G)\) of the graph Laplacian \(\Delta _{m^G}\) is proved:

$$\begin{aligned} \frac{h_G^2}{2}\le \lambda _1(G) \le 2 h_G. \end{aligned}$$
(4.7)

In this general context we define the following concept, which is consistent with the above definition on graphs.

Definition 4.26

The m-Cheeger constant of \([X,d,m,\nu ]\) is defined as

$$\begin{aligned} h_m(X):= \inf \left\{ \frac{P_m (D)}{\min \{ \nu (D), \nu (X \setminus D)\}} \ : \ D \subset X, \ 0< \nu (D) < 1 \right\} . \end{aligned}$$
(4.8)

The above infimum is not attained in general, see, for instance, [48, Example 6.21].

We will now give a variational characterization of the m-Cheeger constant which generalizes the one obtained in [56] for the particular case of finite graphs. Recall that, given a function \(u : X \rightarrow {\mathbb {R}}\), \(\mu \in {\mathbb {R}}\) is a median of u with respect to \(\nu \) if

$$\begin{aligned} \nu (\{ x \in X \ : \ u(x) < \mu \}) \le \frac{1}{2} \nu (X) \quad \hbox {and}\quad \nu (\{ x \in X \ : \ u(x) > \mu \}) \le \frac{1}{2} \nu (X). \end{aligned}$$

We denote by \(\mathrm{med}_\nu (u)\) the set of all medians of u.
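On a finite state space the two defining conditions can be checked directly, so the median set is easy to compute. A minimal sketch in Python (the four-point space, the weights and the function are all illustrative):

```python
import numpy as np

def is_median(mu, u, nu):
    """Check both conditions in the definition of a median of u
    with respect to the measure nu."""
    half = nu.sum() / 2
    return nu[u < mu].sum() <= half and nu[u > mu].sum() <= half

# toy state space: four points with weights nu and values u
nu = np.array([0.125, 0.375, 0.25, 0.25])
u = np.array([-1.0, 0.0, 2.0, 3.0])
medians = [m for m in u if is_median(m, u, nu)]   # every mu in [0, 2] works
```

In this example \(0 \in \mathrm{med}_\nu (u)\), which is the normalization appearing in the variational characterization of \(h_m(X)\).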

Theorem 4.27

[46] We have that

$$\begin{aligned} h_m(X) =\lambda _1^m(X) := \inf \Big \{ TV_m(u) \ : \ \Vert u \Vert _1 = 1, \ 0 \in \mathrm{med}_\nu (u) \Big \}. \end{aligned}$$

Following [20] and using Theorem 4.27, one can show that the Cheeger inequality (4.7) also holds in this context.

Theorem 4.28

[46] The following Cheeger inequality holds

$$\begin{aligned} \frac{h^2_m}{2} \le \mathrm{gap}(-\Delta _m) \le 2 h_m. \end{aligned}$$
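On a finite weighted graph both quantities in this inequality can be computed by brute force, which makes the bound easy to check numerically. A minimal sketch (the graph, two triangles joined by a bridge, is illustrative; \(\mathrm{gap}(-\Delta _m)\) is computed from the symmetrically normalized Laplacian, which is similar to \(I-P\)):

```python
import itertools
import numpy as np

# two triangles joined by a bridge edge, unit weights
n = 6
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

d = W.sum(axis=1)                 # weighted degrees
nu = d / d.sum()                  # reversible probability measure
# gap(-Delta_m): smallest nonzero eigenvalue of I - P, computed via the
# similar symmetric matrix I - D^{-1/2} W D^{-1/2}
Lsym = np.eye(n) - W / np.sqrt(np.outer(d, d))
gap = np.sort(np.linalg.eigvalsh(Lsym))[1]

# m-Cheeger constant h_m(X) by exhaustive search over proper subsets
h = np.inf
for r in range(1, n):
    for D in map(list, itertools.combinations(range(n), r)):
        Dc = [x for x in range(n) if x not in D]
        Pm = W[np.ix_(D, Dc)].sum() / d.sum()          # P_m(D)
        h = min(h, Pm / min(nu[D].sum(), nu[Dc].sum()))

assert h**2 / 2 <= gap <= 2 * h   # the Cheeger inequality
```

For this graph the infimum is attained at one of the two triangles, \(D=\{0,1,2\}\), which gives \(h_m(X)=1/7\).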

The Poincaré inequality (3.4), applied to characteristic functions, implies that there exists \(\lambda >0\) such that

$$\begin{aligned} \lambda \,\nu (D)\big (1-\nu (D)\big )\le P_m(D)\quad \hbox {for every } \nu \text {-measurable set }D. \end{aligned}$$
(4.9)

Hence, since

$$\begin{aligned} \hbox {min}\{x,1-x\}\le 2x(1-x)\le 2\hbox {min}\{x,1-x\} \quad \hbox {for} \ 0\le x\le 1, \end{aligned}$$

inequality (4.9) implies the following isoperimetric inequality (see [2, Theorem 3.46]):

$$\begin{aligned} \hbox {min}\big \{\nu (D),1-\nu (D)\big \} \le \frac{2}{\lambda }P_m(D)\quad \hbox {for every }\nu \text {-measurable set } D. \end{aligned}$$
(4.10)

Definition 4.29

If there exists \(\lambda >0\) satisfying (4.10), we say that \([X,d,m, \nu ]\) satisfies an isoperimetric inequality.

In [46] we proved the following result.

Theorem 4.30

The following statements are equivalent:

  1. (1)

    \([X,d,m, \nu ]\) satisfies a Poincaré inequality, 

  2. (2)

    \(\mathrm{gap}(-\Delta _m) > 0,\)

  3. (3)

    \([X,d,m, \nu ]\) satisfies an isoperimetric inequality, 

  4. (4)

    \(h_m(X) > 0.\)

Recall that, for finite graphs, it is well known that the first non-zero eigenvalue coincides with the Cheeger constant (see [16]), that is,

$$\begin{aligned} h_m(X) \ \hbox { is the first non-zero eigenvalue of} \ \Delta ^{m^G}_1. \end{aligned}$$
(4.11)

In our context we have:

Theorem 4.31

[48] If \(\lambda \not = 0\) is an m-eigenvalue of \(\Delta ^m_1\) then

$$\begin{aligned} h_m(X) \le \lambda . \end{aligned}$$

In the next result we see that if the infimum in (4.8) is attained then \(h_m(X)\) is an m-eigenvalue of \(\Delta ^m_1\).

Theorem 4.32

[48] Let \(\Omega \) be a \(\nu \)-measurable subset of X such that \(0<\nu (\Omega )\le \frac{1}{2}\).

  1. (i)

    If \(\Omega \) and \(X\setminus \Omega \) are m-calibrable then \(\left( \lambda ^m_\Omega , \frac{1}{\nu (\Omega )}\chi _{\Omega }\right) \) is an m-eigenpair of \(\Delta ^m_1\).

  2. (ii)

    If \(h_m(X)=\lambda ^m_\Omega \) then \(\Omega \) and \(X\setminus \Omega \) are m-calibrable.

  3. (iii)

    If \(h_m(X)=\lambda ^m_\Omega \) then \(\left( h_m(X), \frac{1}{\nu (\Omega )}\chi _{\Omega }\right) \) is an m-eigenpair of \(\Delta ^m_1\).

As a consequence of Theorem 4.25 and Theorem 4.32 we have the following result.

Corollary 4.33

If \(h_m(X)\) is a positive m-eigenvalue of \(\Delta ^m_1,\) then, for any eigenvector u associated to \(h_m(X)\) with \(\nu (E_0(u))>0,\) we have that \(\left( h_m(X), \frac{1}{\nu (E_0(u))}\chi _{E_0(u)}\right) \) is an m-eigenpair of \(\Delta ^m_1,\) that \(\nu (E_0(u))\le \frac{1}{2},\) and that

$$\begin{aligned} h_m(X)=\lambda _{E_0(u)}^m. \end{aligned}$$

Moreover, both \({E_0(u)}\) and \(X\setminus {E_0(u)}\) are m-calibrable.

In [48] we give an example showing that (4.11) is not true in general.

5 Evolution problems of Leray–Lions type with nonhomogeneous Neumann boundary conditions

Let \([X,d,m,\nu ]\) be an m-connected metric random walk space. We assume that \(m_x\ll \nu \) for all \(x\in X\).

Definition 5.1

Given a \(\nu \)-measurable set \(\Omega \subset X\), we define its m-boundary as

$$\begin{aligned} \partial _m\Omega :=\{ x\in X\setminus \Omega : m_x(\Omega )>0 \} \end{aligned}$$

and its m-closure as

$$\begin{aligned} \Omega _m:=\Omega \cup \partial _m\Omega . \end{aligned}$$

We will assume that \(\nu (\Omega _m) < +\infty \) in what follows.
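For a graph, \(\partial _m\Omega \) is just the set of vertices outside \(\Omega \) with at least one neighbour in \(\Omega \). A minimal sketch (the path graph and the choice of \(\Omega \) are illustrative):

```python
import numpy as np

# path graph on five vertices with unit weights
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
P = W / W.sum(axis=1, keepdims=True)        # m_x(y) = w_xy / d_x

def m_boundary(P, Omega):
    """partial_m Omega = { x not in Omega : m_x(Omega) > 0 }."""
    n = P.shape[0]
    return {x for x in range(n)
            if x not in Omega and P[x, sorted(Omega)].sum() > 0}

Omega = {1, 2}
bd = m_boundary(P, Omega)                   # {0, 3}: outside neighbours of Omega
closure = Omega | bd                        # m-closure Omega_m = {0, 1, 2, 3}

# from Omega, the walk never jumps outside the m-closure
outside = [x for x in range(n) if x not in closure]
assert all(P[x, outside].sum() == 0 for x in Omega)
```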

5.1 Nonlocal Leray–Lions operators

For \(1<p<+\infty \), let us consider a function \(\mathbf{a}_p:X\times X\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) such that

$$\begin{aligned}&(x,y)\mapsto \mathbf{a}_p(x,y,r) \quad \hbox {is }\nu \otimes m_x\hbox {-measurable for all }r; \nonumber \\&\mathbf{a}_p(x,y,.)\hbox { is continuous for }\nu \otimes m_x\hbox {-a.e } (x,y)\in X\times X; \nonumber \\&\mathbf{a}_p(x,y,r)=-\mathbf{a}_p(y,x,-r) \quad \hbox {for }\nu \otimes m_x\hbox {-a.e } (x,y)\in X\times X\hbox { and for all }r; \nonumber \\&(\mathbf{a}_p(x,y,r)-\mathbf{a}_p(x,y,s))(r-s) > 0 \quad \hbox {for }\nu \otimes m_x\hbox {-a.e. }(x,y)\hbox { and for all }r\ne s;\nonumber \\ \end{aligned}$$
(5.1)

there exist constants \(c,C>0\) such that

$$\begin{aligned} |\mathbf{a}_p(x,y,r)|\le C\left( 1+|r|^{p-1}\right) \quad \hbox {for } \nu \otimes m_x\hbox {-a.e. }(x,y)\in X\times X\hbox { and for all }r, \end{aligned}$$

and

$$\begin{aligned} \mathbf{a}_p(x,y,r)r\ge c\vert r \vert ^p \quad \hbox {for }\nu \otimes m_x\hbox {-a.e. }(x,y)\in X\times X \hbox { and for all }r. \end{aligned}$$

This last condition implies that

$$\begin{aligned} \mathbf{a}_p(x,y,0)=0 \ \hbox {and} \ \hbox {sign}_0(\mathbf{a}_p(x,y,r))=\hbox {sign}_0(r) \quad \hbox {for } \nu \otimes m_x\hbox {-a.e. }(x,y)\in X\times X. \end{aligned}$$

An example of a function \(\mathbf{a}_p\) satisfying the above assumptions is

$$\begin{aligned} \mathbf{a}_p(x,y,r):=\frac{\varphi (x)+\varphi (y)}{2}|r|^{p-2}r, \end{aligned}$$

where \(\varphi :X\rightarrow {\mathbb {R}}\) is a \(\nu \)-measurable function satisfying \(0<c\le \varphi \le C\) for constants c and C. In particular, if \(\varphi \equiv 1\), we have that

$$\begin{aligned} \begin{array}{l} \hbox {div}_m\big (\mathbf{a}_p(x,y,u(y)-u(x))\big )(x)= \int _{X} |u(y)-u(x)|^{p-2}(u(y)-u(x))\, dm_x(y) \end{array} \end{aligned}$$

is the p-Laplacian operator on the metric random walk space.
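For this choice of \(\mathbf{a}_p\) the structural conditions (5.1) can be spot-checked numerically, and \(\hbox {div}_m\mathbf{a}_p\) can be evaluated as a finite sum. A minimal sketch with \(\varphi \equiv 1\) and \(p=3\) (the weighted triangle and the data are illustrative):

```python
import numpy as np

def a_p(r, p):
    """a_p(x, y, r) = |r|^{p-2} r, the example above with phi = 1
    (p > 2 here, so the r = 0 diagonal term is harmless)."""
    return np.abs(r) ** (p - 2) * r

# random walk induced by a weighted triangle
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
P = W / W.sum(axis=1, keepdims=True)       # m_x(y) = w_xy / d_x

def p_laplacian(u, P, p):
    """div_m a_p u (x) = int_X |u(y)-u(x)|^{p-2}(u(y)-u(x)) dm_x(y)."""
    diff = u[None, :] - u[:, None]         # diff[x, y] = u(y) - u(x)
    return (a_p(diff, p) * P).sum(axis=1)

p = 3.0
u = np.array([0.0, 1.0, 4.0])
v = p_laplacian(u, P, p)                   # v[0] = (2/3)*1 + (1/3)*16 = 6

# spot-check the structural conditions (5.1)
r, s = 1.7, -0.3
assert np.isclose(a_p(r, p), -a_p(-r, p))        # antisymmetry in (x, y, r)
assert (a_p(r, p) - a_p(s, p)) * (r - s) > 0     # strict monotonicity
assert a_p(r, p) * r >= abs(r) ** p              # coercivity with c = 1
```

By reversibility, the divergence has zero mean against the invariant measure \(\nu \), which the test below also checks.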

5.2 Neumann boundary operators

We define the nonlocal Neumann boundary operator (of Gunzburger–Lehoucq type) by

$$\begin{aligned} {\mathcal {N}}^{\mathbf{a}_p}_1 u(x):= -\int _{\Omega _m} \mathbf{a}_p(x,y,u(y)-u(x)) dm_x(y) \quad \hbox {for } x \in \partial _m\Omega . \end{aligned}$$

We also define the nonlocal Neumann boundary operator (of Dipierro–Ros-Oton–Valdinoci type) as

$$\begin{aligned} {\mathcal {N}}^{\mathbf{a}_p}_2 u(x):= -\int _{\Omega } \mathbf{a}_p(x,y,u(y)-u(x)) dm_x(y) \quad \hbox {for } x \in \partial _m\Omega . \end{aligned}$$

These types of nonlocal boundary conditions were introduced, for the linear case, in [34] to develop a nonlocal vector calculus, and in [25] for diffusion with the fractional Laplacian.

For each of these Neumann boundary operators our main goal is to study the evolution problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t(t,x) = \hbox {div}_m\mathbf{a}_p u(t,x), &{}\quad x\in \Omega ,\ 0<t<T, \\ {\mathcal {N}}^{\mathbf{a}_p}_{\mathbf {j}} u(t,x) = \varphi (x), &{}\quad x\in \partial _m\Omega , \ 0<t<T, \\ u(0,x) = u_0(x), &{}\quad x\in \Omega , \end{array} \right. \end{aligned}$$
(5.2)

for \({\mathbf {j}}=1,2\), and the following associated Neumann problem

$$\begin{aligned} \left\{ \begin{array}{ll} u(x)-\hbox {div}_m\mathbf{a}_p u(x) = \varphi (x), &{}\quad x\in \Omega , \\ {\mathcal {N}}^{\mathbf{a}_p}_{\mathbf {j}} u(x) = \varphi (x), &{}\quad x\in \partial _m\Omega . \end{array} \right. \end{aligned}$$
(5.3)

In (5.2) and (5.3) we use the following simplified notation

$$\begin{aligned} \hbox {div}_m \mathbf{a}_p u(t,x):=\hbox {div}_m\big (\mathbf{a}_p(x,y,u(t,y)-u(t,x))\big )(x) \end{aligned}$$

and

$$\begin{aligned} \hbox {div}_m \mathbf{a}_p u(x):=\hbox {div}_m\big (\mathbf{a}_p(x,y,u(y)-u(x))\big )(x). \end{aligned}$$

Observe that \(\hbox {div}_m\mathbf{a}_p\) is a kind of Leray–Lions operator for the random walk m. On account of (5.1), we have that

$$\begin{aligned} \hbox {div}_m\mathbf{a}_p u (x)= & {} \frac{1}{2} \int _{X}\big (\mathbf{a}_p(x,y,u(y)-u(x)) - \mathbf{a}_p(y,x,u(x)-u(y))\big ) dm_x(y) \\= & {} \int _X \mathbf{a}_p(x,y,u(y)-u(x)) dm_x(y). \end{aligned}$$

Moreover, by the reversibility of \(\nu \) with respect to m, we have that \(m_x(X\setminus \Omega _m)=0\) for \(\nu \)-a.e. \(x\in \Omega \). Indeed,

$$\begin{aligned} \int _{\Omega }m_x(X\setminus \Omega _m)d\nu (x)=\int _{X\setminus \Omega _m}m_x(\Omega )d\nu (x)=0 . \end{aligned}$$

Consequently,

$$\begin{aligned} \hbox {div}_m\mathbf{a}_p u (x) =\int _{\Omega _m} \mathbf{a}_p(x,y,u(y)-u(x)) dm_x(y) \quad \hbox {for every } x\in \Omega . \end{aligned}$$

Let

$$\begin{aligned} Q_1=\Omega _m\times \Omega _m \end{aligned}$$

and

$$\begin{aligned} Q_2=(\Omega _m\times \Omega _m)\setminus (\partial _m\Omega \times \partial _m\Omega ). \end{aligned}$$

The following integration by parts formula follows by the reversibility of \(\nu \) with respect to m.

Proposition 5.2

[47] Let \({\mathbf {j}}\in \{1,2\}\). Let u be a \(\nu \)-measurable function such that

$$\begin{aligned} (x,y)\mapsto \mathbf{a}_p(x,y,u(y)-u(x))\in L^{q}(Q_{\mathbf {j}},\nu \otimes m_x) \end{aligned}$$

and let \(w \in L^{q'}(\Omega _m),\) then

$$\begin{aligned}&-\int _\Omega \mathrm{div}_m\mathbf{a}_p u (x)w(x)d\nu (x) +\int _{\partial _m\Omega } {\mathcal {N}}^{\mathbf{a}_p}_{\mathbf {j}} u(x) w(x)d\nu (x) \\&\quad = \frac{1}{2} \int _{Q_{\mathbf {j}}} \mathbf{a}_p(x,y,u(y)-u(x)) (w(y) - w(x)) d(\nu \otimes m_x)(x,y) . \end{aligned}$$
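On a finite graph both sides of the integration by parts formula are finite sums, so Proposition 5.2 can be verified directly. A minimal sketch for \({\mathbf {j}}=1\) with \(\mathbf{a}_p(x,y,r)=|r|^{p-2}r\) (the path graph, the choice of \(\Omega \) and the random functions u and w are illustrative):

```python
import numpy as np

# path graph on five vertices; Omega = {1, 2}, so partial_m Omega = {0, 3}
n, p = 5, 3.0
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
d = W.sum(axis=1)
nu, P = d / d.sum(), W / d[:, None]

Omega, bd = [1, 2], [0, 3]
Om = Omega + bd                             # m-closure; Q_1 = Om x Om

a = lambda r: np.abs(r) ** (p - 2) * r      # a_p with phi = 1
rng = np.random.default_rng(0)
u, w = rng.normal(size=n), rng.normal(size=n)

A = a(u[None, :] - u[:, None])              # A[x, y] = a_p(x, y, u(y) - u(x))
div = (A * P)[:, Om].sum(axis=1)            # div_m a_p u (integral over Omega_m)
N1 = -div                                   # N^{a_p}_1 u on the m-boundary

lhs = -sum(div[x] * w[x] * nu[x] for x in Omega) \
    + sum(N1[x] * w[x] * nu[x] for x in bd)
rhs = 0.5 * sum(a(u[y] - u[x]) * (w[y] - w[x]) * nu[x] * P[x, y]
                for x in Om for y in Om)
assert np.isclose(lhs, rhs)                 # Proposition 5.2 with j = 1
```

The identity holds exactly here because \(\nu (x)m_x(y)=w_{xy}/\sum _z d_z\) is symmetric, which is the reversibility used in the proof.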

5.3 Neumann boundary conditions of Gunzburger–Lehoucq type

In this subsection we study the problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t(t,x) = \hbox {div}_m\mathbf{a}_p u(t,x), &{}\quad x\in \Omega ,\ 0<t<T, \\ {\mathcal {N}}^{\mathbf{a}_p}_1 u(t,x) = \varphi (x), &{}\quad x\in \partial _m\Omega , \ 0<t<T, \\ u(0,x) = u_0(x), &{}\quad x\in \Omega . \end{array} \right. \end{aligned}$$
(5.4)

We will assume that the following Poincaré type inequality holds: there exists a constant \(\lambda >0\) such that, for any \(u \in L^p(\Omega _m,\nu )\),

$$\begin{aligned} \left\| u \right\| _{L^p(\Omega _m,\nu )} \le \lambda \left( \left( \int _{Q_1} |u(y)-u(x)|^p d(\nu \otimes m_x)(x,y) \right) ^{\frac{1}{p}}+\left| \int _{\Omega } u\,d\nu \right| \right) \end{aligned}$$

or, equivalently,

$$\begin{aligned} \left\| u - \frac{1}{\nu (\Omega )} \int _\Omega u d\nu \right\| _{L^p(\Omega _m,\nu )} \le \lambda \left( \int _{Q_1} |u(y)-u(x)|^p d(\nu \otimes m_x)(x,y) \right) ^{\frac{1}{p}}. \end{aligned}$$

It is shown in [48] (see also [4, 5]) that, under rather general conditions, there are metric random walk spaces satisfying this kind of inequality.

The main tool to study Problem (5.4) is Nonlinear Semigroup Theory. In order to use this theory, we define the following operator on \( L^1(\Omega ,\nu )\times L^1(\Omega ,\nu )\) associated to the problem. Observe that the space of definition is \(L^1(\Omega ,\nu )\) and not \( L^1(\Omega _m,\nu )\).

Definition 5.3

Let \(\varphi \in L^1(\partial _m\Omega ,\nu )\). We say that \((u,v) \in B^m_{\mathbf{a}_p,\varphi }\) if \( u,v \in L^1(\Omega ,\nu )\) and there exists \( {{\overline{u}}}\in L^p(\Omega _m,\nu )\) (which we will also denote by u) such that \({\overline{u}}_{\vert \Omega } = u\),

$$\begin{aligned} (x,y)\mapsto \mathbf{a}_p(x,y,u(y)-u(x))\in L^{p'}(Q_1,\nu \otimes m_x) \end{aligned}$$

and

$$\begin{aligned} \left\{ \begin{array}{ll} -\hbox {div}_m\mathbf{a}_p u = v &{}\quad \hbox {in} \ \Omega , \\ {\mathcal {N}}^{\mathbf{a}_p}_1 u = \varphi &{}\quad \hbox {in} \ \partial _m\Omega ; \end{array} \right. \end{aligned}$$

that is,

$$\begin{aligned} v(x) = - \int _{\Omega _m} \mathbf{a}_p(x,y,u(y)-u(x)) dm_x(y), \quad x \in \Omega , \end{aligned}$$

and

$$\begin{aligned} \varphi (x) = -\int _{\Omega _m} \mathbf{a}_p(x,y,u(y)-u(x)) dm_x(y), \quad x \in \partial _m\Omega . \end{aligned}$$

Theorem 5.4

[47] Let \(\varphi \in L^{p'}(\partial _m\Omega ,\nu )\). The operator \(B^m_{\mathbf{a}_p,\varphi }\) is completely accretive and satisfies the range condition

$$\begin{aligned} L^{p'}(\Omega ,\nu )\subset R(I+ B^m_{\mathbf{a}_p,\varphi }). \end{aligned}$$

Consequently,  \(B^m_{\mathbf{a}_p,\varphi }\) is m-completely accretive in \(L^{p'}(\Omega ,\nu )\).

Theorem 5.5

[47] Let \(\varphi \in L^{p'}(\partial _m\Omega ,\nu )\). Then, 

$$\begin{aligned} \overline{D(B^m_{\mathbf{a}_p,\varphi })}^{L^{p'}(\Omega ,\nu )}=L^{p'}(\Omega ,\nu ). \end{aligned}$$

The following theorem is a consequence of the previous results thanks to Nonlinear Semigroup Theory.

Theorem 5.6

Let \(\varphi \in L^{p'}(\partial _m\Omega ,\nu )\) and \(T>0\). For any \(u_0\in \overline{D(B^m_{\mathbf{a}_p,\varphi })}^{L^{p'}(\Omega ,\nu )}=L^{p'}(\Omega ,\nu )\) there exists a unique mild solution u(t, x) of Problem (5.4). Moreover, for any \( q\ge p'\) and \(u_{0,i}\in L^q(\Omega ,\nu ),\) \(i=1,2,\) we have the following contraction principle for the corresponding mild solutions \(u_i\):

$$\begin{aligned} \Vert (u_1(t,.)-u_2(t,.))^+\Vert _{L^q(\Omega ,\nu )}\le \Vert (u_{0,1}-u_{0,2})^+\Vert _{L^q(\Omega ,\nu )} \quad \hbox {for any } 0\le t< T. \end{aligned}$$

If \(u_0\in D(B^m_{\mathbf{a}_p,\varphi }),\) then the mild solution is a strong solution.
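For \(p=2\) the operator \(\hbox {div}_m\mathbf{a}_p\) is linear, and on a finite graph each implicit Euler step behind the mild solution reduces to a linear system on \(\Omega _m\). A minimal sketch with homogeneous datum \(\varphi =0\) (the path graph and the data are illustrative; this is a toy version of the resolvent iteration, not the construction used in [47]):

```python
import numpy as np

# path graph; Omega = {1, 2}, partial_m Omega = {0, 3}; p = 2 and phi = 0
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
P = W / W.sum(axis=1, keepdims=True)
nu = W.sum(axis=1) / W.sum()

Omega, bd = [1, 2], [0, 3]
Om = Omega + bd                                 # unknowns live on Omega_m
Psub = P[np.ix_(Om, Om)]
M = Psub - np.diag(Psub.sum(axis=1))            # M u(x) = int_{Om}(u(y)-u(x)) dm_x(y)

def implicit_euler(u0, dt, steps):
    """Each step solves the resolvent problem
       u - dt * div_m u = u_prev in Omega,  N_1 u = 0 on partial_m Omega."""
    k = len(Omega)
    A = np.vstack([np.eye(k, len(Om)) - dt * M[:k], -M[k:]])
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, np.concatenate([u[:k], np.zeros(len(bd))]))
    return u

u0 = np.array([1.0, -1.0, 0.0, 0.0])            # values on Om, interior first
u1 = implicit_euler(u0, 0.1, 50)
mass0 = (u0[: len(Omega)] * nu[Omega]).sum()    # conserved when phi = 0
mass1 = (u1[: len(Omega)] * nu[Omega]).sum()
```

With \(\varphi =0\) the scheme preserves \(\int _\Omega u\,d\nu \) at every step, which reflects the integration by parts formula with \(w\equiv 1\); the contraction principle of Theorem 5.6 can also be observed numerically for two initial data.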

5.4 Neumann boundary conditions of Dipierro–Ros-Oton–Valdinoci type

The aim now is to study Problem (5.2) with \({\mathbf {j}}=2\), that is, with Neumann boundary conditions of Dipierro–Ros-Oton–Valdinoci type:

$$\begin{aligned} \left\{ \begin{array}{ll} u_t(t,x) = \hbox {div}_m\mathbf{a}_p u(t,x), &{}\quad x\in \Omega ,\ 0<t<T, \\ {{\mathcal {N}}^{\mathbf{a}_p}_2} u(t,x) = \varphi (x), &{}\quad x\in \partial _m\Omega , \ 0<t<T, \\ u(0,x) = u_0(x), &{}\quad x\in \Omega . \end{array} \right. \end{aligned}$$
(5.5)

Here we do not assume the existence of a Poincaré type inequality.

The following space of functions will play an important role for this problem.

Definition 5.7

Let

$$\begin{aligned} L^{m, \infty }(\partial _m\Omega ,\nu ):=\left\{ \varphi :\partial _m\Omega \rightarrow {\mathbb {R}}\ : \ \varphi \hbox { is }\nu \hbox {-measurable and } \frac{\varphi }{m_{(.)}(\Omega )}\in L^\infty (\partial _m\Omega , \nu ) \right\} . \end{aligned}$$

Note that

$$\begin{aligned} L^{m, \infty }(\partial _m\Omega ,\nu )\subset L^{\infty }(\partial _m\Omega ,\nu ). \end{aligned}$$

Example 5.8

Let \([V,d_G,m^G]\) be the metric random walk space associated to a locally finite weighted discrete graph. If \(\Omega \subset V\) is such that \(\partial _m\Omega \) is a finite set, then

$$\begin{aligned} L^{m,\infty }(\partial _m\Omega ,\nu )= L^{\infty }(\partial _m\Omega ,\nu ). \end{aligned}$$

For the metric random walk space \([{\mathbb {R}}^N, d, m^J]\) (suppose that \(\hbox {supp}(J)=B(0,R)\)), if \(\Omega \subset {\mathbb {R}}^N\) is a bounded domain and \(\Omega _r:=\{x\in {\mathbb {R}}^N : \hbox {dist}(x,\Omega )<r\}\), then

$$\begin{aligned} \left\{ \varphi \in L^{\infty }(\partial _m\Omega ,\nu ) \ : \ \hbox {supp}(\varphi )\subset \Omega _r, \ r<R \right\} \subset L^{m,\infty }(\partial _m\Omega ,\nu ) . \end{aligned}$$

To study Problem (5.5) we define the following operator in \( L^1(\Omega ,\nu )\times L^1(\Omega ,\nu )\).

Definition 5.9

Let \(1< p < \infty \). Let \(\varphi \in L^{m, \infty }(\partial _m\Omega ,\nu )\). We say that \((u,v) \in A^m_{\mathbf{a}_p,\varphi }\) if \( u,v \in L^1(\Omega ,\nu )\), and there exists a \(\nu \)-measurable function \({{\overline{u}}}\) in \(\Omega _m\) with \({\overline{u}}_{\vert \Omega } = u\) (which we also denote by u) satisfying

$$\begin{aligned}&m_{(\cdot )} (\Omega )|u|^{p-1}\in L^1(\partial _m\Omega ,\nu ),\\&(x,y)\mapsto \mathbf{a}_p(x,y,u(y)-u(x))\in L^{1}(Q_2,\nu \otimes m_x), \end{aligned}$$

and

$$\begin{aligned} \left\{ \begin{array}{ll} -\hbox {div}_m\mathbf{a}_p u = v &{}\quad \hbox {in}\ \Omega , \\ {\mathcal {N}}^{\mathbf{a}_p}_2 u = \varphi &{}\quad \hbox {in}\ \partial _m\Omega , \end{array} \right. \end{aligned}$$

that is,

$$\begin{aligned} v(x) = - \int _{\Omega _m} \mathbf{a}_p(x,y,u(y)-u(x)) dm_x(y), \quad x \in \Omega , \end{aligned}$$

and

$$\begin{aligned} \varphi (x) = -\int _{\Omega } \mathbf{a}_p(x,y,u(y)-u(x)) dm_x(y), \quad x \in \partial _m\Omega . \end{aligned}$$

Theorem 5.10

[47] Let \(\varphi \in L^{m, \infty }(\partial _m\Omega ,\nu )\). The operator \(A^m_{\mathbf{a}_p,\varphi }\) is completely accretive and satisfies the range condition

$$\begin{aligned} L^{p'}(\Omega ,\nu ) \subset R(I+ A^m_{\mathbf{a}_p,\varphi }). \end{aligned}$$

Theorem 5.11

[47] Let \(\varphi \in L^{m,\infty }(\partial _m\Omega ,\nu )\). Then,  we have

$$\begin{aligned} L^{\infty }(\Omega ,\nu )\subset D(A^m_{\mathbf{a}_p,\varphi }) \end{aligned}$$

and,  consequently, 

$$\begin{aligned} \overline{D(A^m_{\mathbf{a}_p,\varphi })}^{L^{p'}(\Omega ,\nu )}=L^{p'}(\Omega ,\nu ). \end{aligned}$$

For \(p\ge 2,\)

$$\begin{aligned} L^{p-1}(\Omega ,\nu )\subset D(A^m_{\mathbf{a}_p,\varphi }). \end{aligned}$$

The following theorem is a consequence of the previous results thanks to Nonlinear Semigroup Theory.

Theorem 5.12

Let \(\varphi \in L^{m,\infty }(\partial _m\Omega ,\nu )\) and \(T>0\). For any \(u_0\in \overline{D(A^m_{\mathbf{a}_p,\varphi })}^{L^{p'}(\Omega ,\nu )}=L^{p'}(\Omega ,\nu )\) there exists a unique mild solution u(t, x) of Problem (5.5). Moreover, for any \( q\ge p'\) and \(u_{0,i}\in L^q(\Omega ,\nu ),\) \(i=1,2,\) we have the following contraction principle for the corresponding mild solutions \(u_i\):

$$\begin{aligned} \Vert (u_1(t,.)-u_2(t,.))^+\Vert _{L^q(\Omega ,\nu )}\le \Vert (u_{0,1}-u_{0,2})^+\Vert _{L^q(\Omega ,\nu )} \quad \hbox {for any } 0\le t< T. \end{aligned}$$

If \(u_0\in D(A^m_{\mathbf{a}_p,\varphi }),\) then the mild solution is a strong solution. In particular, if \(u_0\in L^{\infty }(\Omega ,\nu ),\) Problem (5.5) has a unique strong solution. For \(p\ge 2\) this is true for data in \(L^{p-1}(\Omega ,\nu )\).

In [55] doubly nonlocal diffusion problems of Leray–Lions type with further nonlinearities on the boundary have also been studied.