1 Introduction

Minimal solutions form a particular class of solutions to evolution problems which possibly have more than one solution corresponding to given initial data. The idea is to select one particular solution corresponding to each given range of solutions, i.e. to each set in the phase space arising as the image of a solution curve.

The concept is introduced in ([13], Sect. 3) for gradient flows in Hilbert spaces, generated by continuously differentiable functions. In [13], the reverse approximation of gradient flows as minimizing movements is studied; the notion of minimal solutions proves crucial in the considerations therein.

In the present paper, an abstract approach is taken with the aim of introducing the concept of minimal solutions to a wide variety of evolution problems with nonunique solutions, on a topological space \(\mathscr {S}\), endowed with a Hausdorff topology.

Minimal solutions. A partial order \(\succ\) between solutions \(u:[0, +\infty ) \rightarrow \mathscr {S}\) sharing the same range \(\mathscr {R}= \mathscr {R}[u] := u([0, +\infty ))\) in \(\mathscr {S}\) is defined. We say that \(u\succ v\) if there exists an increasing 1-Lipschitz map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) with \({\mathsf {z}}(0) = 0\) such that

$$\begin{aligned} u(t) = v({\mathsf {z}}(t)) \quad \text { for all } t\ge 0. \end{aligned}$$
(1.1)

A solution u is minimal if, for every solution v, \(u\succ v\) yields \(u = v\) (see Definition 3.6).
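To illustrate this order with a simple example (a hypothetical scalar problem in the spirit of the one-dimensional gradient flow discussed in [13, Remark 3.2], not reproduced from there), consider the nonnegative solutions of

$$\begin{aligned} u'(t) = 3\, u(t)^{2/3}, \quad u(0) = 0, \qquad \text {among which one finds } u_T(t) := \big ((t-T)_+\big )^3, \ T\ge 0. \end{aligned}$$

All the curves \(u_T\) share the range \([0, +\infty )\), and \(u_T(t) = u_0((t-T)_+)\), so that \(u_T\succ u_0\) with \({\mathsf {z}}(t) = (t-T)_+\). Conversely, \(u_0\succ u_T\) fails for every \(T>0\): any map \({\mathsf {z}}\) with \(u_0(t) = u_T({\mathsf {z}}(t))\) would have to satisfy \({\mathsf {z}}(t) = t + T\) for \(t>0\), which is incompatible with \({\mathsf {z}}(0)=0\) and the 1-Lipschitz bound. The curve \(u_0(t) = t^3\), which leaves the degenerate rest point immediately, is thus the natural candidate for the minimal solution associated with this range.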

Within the abstract framework of generalized \(\Lambda\)-semiflow (introduced in Sect. 3.1), it is shown that, under natural hypotheses,

  1. there exists a unique minimal solution corresponding to each range \(\mathscr {R}= \mathscr {R}[u]\),

  2. each minimal solution induces all other solutions with the same range by time reparametrization (1.1), and

  3. each minimal solution reaches every point in the range in minimal time (see Theorem 3.9).

Abstraction of semiflow. An established basic concept in the study of evolution problems with unique solutions (corresponding to given initial data) is that of a semiflow. A semiflow on a metric space \(\mathscr {S}\) is a family of continuous mappings \(S(t): \mathscr {S}\rightarrow \mathscr {S}, \ t\ge 0,\) for which the semigroup properties

$$\begin{aligned} S(0)x = x, \quad S(t+s)x= S(t)S(s)x \quad \quad (x\in \mathscr {S}, \ s,t\ge 0) \end{aligned}$$

hold; \(t\mapsto S(t)x\) is identified with the unique solution \(u:[0, +\infty ) \rightarrow \mathscr {S}\) with initial value \(u(0) = x\).

Diverse methods are known for abstracting dynamical systems so as to allow for nonuniqueness of solutions.

One method is to define S(t) as a set-valued mapping and to interpret \(S(\cdot )x\) as the collection of all the solutions \(u: [0, +\infty ) \rightarrow \mathscr {S}\) with initial value \(u(0) = x\) (multivalued semiflow, e.g. [3, 4, 19]). Another method is to consider a semiflow \(S(\cdot )\) defined on the space of maps \(u: [0, +\infty ) \rightarrow \mathscr {S}\) (not on the phase space \(\mathscr {S}\)), by \(S(t)u = u^t\), where \(u^t(\cdot ):= u(\cdot + t)\) [24]. A third method [5] is to take the solutions themselves as objects of study and generalize the concept of semiflow on the basis that a semiflow \(S(\cdot )\) can be equivalently defined as the family of maps \(u:[0, +\infty ) \rightarrow \mathscr {S}, \ u(t) = S(t)u(0)\).

Definition 1.1

(Ball [5]) A generalized semiflow \(\mathscr {U}\) on \(\mathscr {S}\) is a family of maps \(u: [0, +\infty ) \rightarrow \mathscr {S}\) (called solutions) satisfying the hypotheses

  1. (G1)

    Existence: For each \(u_0\in \mathscr {S}\) there exists at least one \(u\in \mathscr {U}\) with \(u(0) = u_0\).

  2. (G2)

    Translates of solutions are solutions: If \(u\in \mathscr {U}\) and \(\tau \ge 0\), then the map \(u^\tau (t) := u(t+\tau ), \ t\in [0, +\infty ),\) belongs to \(\mathscr {U}\).

  3. (G3)

    Concatenation: If \(u, v \in \mathscr {U}, \ \bar{t}\ge 0\), with \(v(0) = u(\bar{t})\), then \(w\in \mathscr {U}\), where \(w(t) := u(t)\) for \(0\le t\le \bar{t}\) and \(w(t) := v(t-\bar{t})\) for \(t > \bar{t}\).

  4. (G4)

    Upper-semicontinuity with respect to initial data: If \(u_j\in \mathscr {U}\) with \(u_j(0) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} x\), then there exist a subsequence \(u_{j_k}\) of \(u_j\) and \(u\in \mathscr {U}\) with \(u(0) = x\) such that \(u_{j_k}(t) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} u(t)\) for each \(t\ge 0\).

If in addition the hypothesis (S) is satisfied, then \(\mathscr {U}\) is a semiflow:

  1. (S)

    For each \(u_0\in \mathscr {S}\) there is exactly one \(u\in \mathscr {U}\) with \(u(0) = u_0\).

Generalized \(\Lambda\)-semiflow. The concept of generalized \(\Lambda\)-semiflow introduced in this paper is an abstraction of semiflows with nonperiodic solutions, where nonuniqueness phenomena may occur (see Sect. 3.1).

As in [5], a semiflow is defined as a family of maps \(u: [0, +\infty ) \rightarrow \mathscr {S}\) satisfying the hypotheses (G1)–(G4) and (S). The solutions themselves are taken as objects of study. However, in the study of minimal solutions, the dynamics between solutions sharing the same range are of interest, rather than the limit behaviour (G4) of solutions possibly having different ranges. The definition of generalized \(\Lambda\)-semiflow mirrors this aspect.

A generalized \(\Lambda\)-semiflow on \(\mathscr {S}\) is defined to be a nonempty family of maps \(u: [0, +\infty ) \rightarrow \mathscr {S}\) (called solutions) satisfying hypotheses relating to

(\({\Lambda }1\)):

Time translation: time translates of solutions are solutions,

(\({\Lambda }2\)):

Concatenation: the concatenation of two solutions yields a solution,

(\({\Lambda }3\)):

Nonperiodicity: if \(u(s) = u(t)\) for some \(s\le t\), then \(u\) is constant in \([s, t]\),

(\({\Lambda }4\)):

Extension: there is a sufficient condition (to be given in Definition 3.1) for the situation that \(\lim _{t\uparrow +\infty } u(t)\) exists and \(\mathscr {R}[u]\cup \{\lim _{t\uparrow +\infty }u(t)\}\) is the range of a solution,

(\({\Lambda } 5\)):

‘Local’ character: solutions are characterized by their behaviour in finite time intervals

(see Definition 3.1). We will focus on generalized \(\Lambda\)-semiflows with sequentially continuous solutions.

We note that

  • There is a connection between the concept of generalized \(\Lambda\)-semiflow and Ball’s concept of generalized semiflow (see below, minimal solutions to generalized semiflows);

  • The hypotheses constituting a generalized \(\Lambda\)-semiflow are mild enough to allow of applications of the theory of minimal solutions to cases beyond the scope of generalized semiflows (see below, minimal solutions to gradient flows in metric spaces).

Minimal solutions to generalized semiflows. Every generalized semiflow with nonperiodic continuous solutions is a generalized \(\Lambda\)-semiflow, and all our results 1, 2, 3 relating to existence, uniqueness and characteristics of minimal solutions are applicable (see Theorems 4.2 and 4.4).

Minimal solutions to gradient flows in metric spaces. A p-gradient flow on a metric space \(\mathscr {S}\) [2] (for \(p\in (1,+\infty )\) with conjugate exponent \(q\)), generated by a functional \(\phi : \mathscr {S}\rightarrow (-\infty, +\infty ]\) and its strong upper gradient \(g: \mathscr {S}\rightarrow [0, +\infty ]\), is described by the energy dissipation inequality

$$\begin{aligned} \phi (u(s)) - \phi (u(t)) \ \ge \ \frac{1}{q}\int _s^t{g^q(u(r)) \ dr} \ + \ \frac{1}{p}\int _s^t{|u'|^p(r) \ dr} \end{aligned}$$

for all \(0\le s\le t\); the solutions \(u: [0, +\infty ) \rightarrow \mathscr {S}\) are referred to as p-curves of maximal slope for \(\phi\) w.r.t. g (see definitions in Sect. 5.1).
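For orientation, we record the classical Euclidean special case (included only as an illustration; it assumes \(\mathscr {S}= \mathbb {R}^n\), \(p=q=2\), \(\phi \in C^1(\mathbb {R}^n)\) and \(g = |\nabla \phi |\), cf. [2]). For an absolutely continuous curve u, the chain rule and Young's inequality give

$$\begin{aligned} \phi (u(s)) - \phi (u(t)) \ = \ -\int _s^t{\langle \nabla \phi (u(r)), u'(r)\rangle \ dr} \ \le \ \frac{1}{2}\int _s^t{|\nabla \phi (u(r))|^2 \ dr} \ + \ \frac{1}{2}\int _s^t{|u'(r)|^2 \ dr}, \end{aligned}$$

so the energy dissipation inequality can only hold with equality, and equality in Young's inequality forces \(u'(r) = -\nabla \phi (u(r))\) for a.e. r; in this setting the 2-curves of maximal slope are exactly the classical solutions of the gradient flow equation.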

If \(\phi\) and \(g\) are lower semicontinuous and \(\phi\) has a lower bound of order \(p\), then the corresponding p-gradient flow is a generalized \(\Lambda\)-semiflow and all our results 1, 2, 3 relating to existence, uniqueness and characteristics of minimal solutions are applicable (see Theorems 5.8 and 5.9).

Our assumptions do not suffice to guarantee a priori the existence of curves of maximal slope; whenever solutions do exist, however, our concept of minimal solutions can be applied.

Further, a special property of minimal solutions to a gradient flow can be proved: a curve of maximal slope is a minimal solution if and only if it crosses the zero level set of the strong upper gradient \(g\) in an \(\mathscr {L}^1\)-negligible set of times (before it possibly becomes eventually constant) (see Proposition 5.11).

We note that, under our assumptions, the gradient flow is a generalized \(\Lambda\)-semiflow but does not fit into the concept of generalized semiflow; additional assumptions, such as the relative compactness of the sublevels of \(\phi\) (which entails that \(\phi\) is bounded from below by a constant) and a conditional continuity assumption, would be needed in order to prove the upper-semicontinuity hypothesis (G4) in Definition 1.1 of a generalized semiflow (cf. [23], where the theory of generalized semiflows [5] is used to prove the existence of the global attractor for a gradient flow).

Further results. If there exists a function \(\Psi : \mathscr {S}\rightarrow \mathbb {R}\) which decreases along solution curves, a characterization of minimal solutions in terms of \(\Psi\) is also provided (see Proposition 4.6). Time translation and concatenation of minimal solutions yield minimal solutions (see Proposition 3.11).

Thematic classification. It is noteworthy that the set of critical points of an energy functional with respect to its upper gradient, which is of particular interest in the context of a minimal gradient flow (see above), also plays an important role in the analysis of the asymptotic behaviour of a gradient flow, as it contains, under suitable assumptions, the \(\omega\)-limit sets of all the solutions (cf. [7, 23]). Ball’s concept of a generalized semiflow, too, was originally aimed at proving the existence of an associated global attractor and studying its features (cf. [5, 6]).

The concept of minimal solutions and the concepts of \(\omega\)-limit sets and global attractors, however, concern different structural features of the solution set. The idea behind the concept of minimal solutions is to structure the collection of all the solutions to an evolution problem according to their behaviour in all finite time intervals rather than their asymptotic behaviour.

The results of this paper relating to the existence and uniqueness of minimal solutions, their features and the reparametrization technique for generating all other solutions represent new information on the dynamics of solutions. They uncover a uniqueness phenomenon and an order of solutions for a wide variety of evolution problems possibly having infinitely many solutions corresponding to given initial data, thus suggesting a new angle on the dynamics of solutions: in fact, the ranges of solutions, which are subsets of the phase space, contain, together with the associated unique minimal solutions, all the relevant data for the evolution of the generalized \(\Lambda\)-semiflow in all finite time intervals. This insight into a generalized \(\Lambda\)-semiflow provides an abstraction of the principle that initial data determine, together with the associated unique solutions, the evolution of a semiflow in all finite time intervals.

This paper’s purpose is to introduce the abstract mathematical theory of minimal solutions to generalized \(\Lambda\)-semiflows, whereas a future challenge will be to set it in the biological, physical, ... context of a concrete evolution problem.

Minimal solutions and minimizing movements [13]. The well-known concept of minimizing movements was introduced by Ennio De Giorgi [8] as a “natural meeting point” of many evolution problems from different research fields in mathematics (see [2, 12] for a thorough investigation into minimizing movement schemes for gradient flows in metric spaces). The paper [13] relates minimal solutions and minimizing movements. Therein, it is proved that if \(\mathbb {H}\) is a finite-dimensional Hilbert space and \(\phi : \mathbb {H}\rightarrow \mathbb {R}\) is a continuously differentiable function, quadratically bounded from below (e.g. Lipschitz), then for every solution u to the gradient flow equation

$$\begin{aligned} u'(t) \ = \ -\nabla \phi (u(t)), \ t\ge 0, \end{aligned}$$

there exist perturbations \(\phi _\tau : \mathbb {H}\rightarrow \mathbb {R} \ (\tau >0)\) converging to \(\phi\) in the Lipschitz norm so that the following holds good: all the discrete solutions \(U_\tau : [0, +\infty ) \rightarrow \mathbb {H}\),

$$\begin{aligned} U_\tau (t) := U_\tau ^n \text { if } t\in ((n-1)\tau, n\tau ], \ n\in \mathbb {N}, \ U_\tau (0):=u(0), \end{aligned}$$

to the minimizing movement scheme

$$\begin{aligned} U_\tau ^n \text { is a minimizer for } \phi _\tau (\cdot ) + \frac{1}{2\tau }|\cdot -U_\tau ^{n-1}|^2, \ \ \ U_\tau ^0 := u(0), \end{aligned}$$

will converge to u as \(\tau \downarrow 0\). This finally proves a conjecture raised by Ennio De Giorgi [8] at the beginning of the 1990s, deepening our understanding of a gradient flow as a minimizing movement. If u is a minimal solution, such a reverse approximation as minimizing movement can be constructed directly by selecting suitable compact subsets \(\mathcal {U}_\tau \subset \mathscr {R}[u]\) and coefficients \(\lambda _\tau \downarrow 0\) and setting \(\phi _\tau (\cdot ):=\phi (\cdot ) + \lambda _\tau \min _{y\in \mathcal {U}_\tau }|\cdot -y|\) (in this case, the assumption that \(\mathbb {H}\) has finite dimension can be dropped, see [13, Sect. 4]). An approximation argument [13, Sects. 5 and 6] then leads to the general statement for all solutions.
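For readers who prefer to see the scheme in action, the following is a minimal numerical sketch of the minimizing movement iteration for a hypothetical one-dimensional energy; it only illustrates the step in which \(U_\tau ^n\) is a minimizer of \(\phi (\cdot ) + \frac{1}{2\tau }|\cdot -U_\tau ^{n-1}|^2\), and it does not reproduce the perturbations \(\phi _\tau\) or the reverse-approximation construction of [13].

```python
# Minimizing movement scheme (implicit Euler via minimization) for the
# hypothetical 1-d energy phi(x) = x^2/2, whose gradient flow is u'(t) = -u(t).
import numpy as np
from scipy.optimize import minimize_scalar

def phi(x):
    return 0.5 * x**2

def minimizing_movement(u0, tau, n_steps):
    """Return the discrete solution U_tau^0, ..., U_tau^{n_steps} of the scheme."""
    U = [u0]
    for _ in range(n_steps):
        prev = U[-1]
        # U_tau^n is a minimizer of phi(.) + |. - U_tau^{n-1}|^2 / (2 tau)
        res = minimize_scalar(lambda x: phi(x) + (x - prev) ** 2 / (2.0 * tau))
        U.append(res.x)
    return np.array(U)

tau = 0.01
U = minimizing_movement(u0=1.0, tau=tau, n_steps=200)
t = tau * np.arange(len(U))
# For this quadratic energy the scheme approximates u(t) = e^{-t} u(0);
# the printed deviation is of order tau.
print(np.max(np.abs(U - np.exp(-t))))
```

For the quadratic energy above each step has the closed form \(U_\tau ^n = U_\tau ^{n-1}/(1+\tau )\), so the discrete solutions converge to \(u(t) = e^{-t}u(0)\) as \(\tau \downarrow 0\); the numerical minimization merely stands in for this step when no explicit formula is available.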

Plan of the paper. In Sect. 3, we give the precise definitions of generalized \(\Lambda\)-semiflow and of minimal solutions, explaining our hypotheses and the link to the classical notion of semiflow, and we prove results relating to existence, uniqueness and characteristics of minimal solutions within the abstract framework of generalized \(\Lambda\)-semiflows. In Sects. 4 and 5, we apply our concept of minimal solutions to generalized semiflows (Sect. 4) and to gradient flows in metric spaces (Sect. 5).

2 Notation

The phase space \(\mathscr {S}\) is endowed with a Hausdorff topology and \(x_j{\mathop {\rightarrow }\limits ^{\mathscr {S}}} x\) denotes the corresponding convergence of sequences.

The range of a curve \(u: [0, +\infty ) \rightarrow \mathscr {S}\) is denoted by

$$\begin{aligned} \mathscr {R}[u]:= u([0, +\infty )), \end{aligned}$$

its union with what is usually referred to as \(\omega\)-limit set in the literature by

$$\begin{aligned} \overline{\mathscr {R}[u]} := \mathscr {R}[u]\cup \{w_\star \in \mathscr {S}\ | \ \exists t_n \rightarrow +\infty, \ u(t_n) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} w_\star \}, \end{aligned}$$

and we set

$$\begin{aligned} T_\star (u) := \inf \{s\ge 0 \ | \ u(t) = u(s) \text { for all } t\ge s\} \in [0, +\infty ]. \end{aligned}$$

We say that the limit \(\lim _{t\uparrow \nu }u(t)=:w_\star \in \mathscr {S}\) exists for \(\nu \in (0, +\infty ]\) iff \(u(t_n){\mathop {\rightarrow }\limits ^{\mathscr {S}}}w_\star\) for every sequence of times \(t_n\uparrow \nu\).

3 Generalized \(\Lambda\)-semiflow, minimal solutions

We develop an abstract framework for our analysis of evolution problems for which there may be more than one solution sharing the same range. In this context, we define generalized \(\Lambda\)-semiflows, generalizing to a certain extent the notion of a semiflow with Lyapunov function, adapted to our considerations.

3.1 Definition of generalized \(\Lambda\)-semiflow

Definition 3.1

A generalized \(\Lambda\)-semiflow \(\mathscr {U}\) on \(\mathscr {S}\) is a nonempty family of maps \(u: [0, +\infty ) \rightarrow \mathscr {S}\) satisfying the hypotheses:

  1. (H1)

    For every \(u\in \mathscr {U}\) and \(\tau \ge 0\), the map \(u^{\tau }(t) := u(t + \tau ), \ t\in [0, +\infty )\), belongs to \(\mathscr {U}\).

  2. (H2)

    Whenever \(u, v \in \mathscr {U}\) with \(v(0) = u(\bar{t})\) for some \(\bar{t} \ge 0\), then the map \(w: [0, +\infty ) \rightarrow \mathscr {S}\), defined by \(w(t):=u(t)\) if \(t\le \bar{t}\) and \(w(t):=v(t-\bar{t})\) if \(t > \bar{t}\), belongs to \(\mathscr {U}\).

  3. (H3)

    Whenever \(u, v \in \mathscr {U}\) with \(v([s, t])\subset \mathscr {R}[u]\) for some \(t > s \ge 0\), then for every \(l_1, l_2 \in [s,t]\) the following holds: if \(v(l_1)=u(r_1)\) and \(v(l_2)=u(r_2)\) with \(u(r_1)\ne u(r_2)\) and \(r_1 < r_2\), then \(l_1 < l_2\).

  4. (H4)

    If \(u\in \mathscr {U}\) and there exists a map \(w: [0, \theta ) \rightarrow \mathscr {S}\) with \(\theta < +\infty\) such that \(w|_{[0, T]}\) can be extended to a map in \(\mathscr {U}\) for every \(T\in [0,\theta )\), and \(w([0, \theta )) = \mathscr {R}[u]\), then the limit \(\lim _{t\uparrow +\infty }u(t)=:w_\star \in \mathscr {S}\) exists and the map \(\bar{w}: [0, +\infty ) \rightarrow \mathscr {S}\), defined by

    $$\begin{aligned} \bar{w}(t) := {\left\{ \begin{array}{ll} w(t) &{}\text {if } t < \theta \\ w_\star &{}\text {if } t\ge \theta \end{array}\right. } \end{aligned}$$

    belongs to \(\mathscr {U}\).

  5. (H5)

    If a map \(w: [0, +\infty ) \rightarrow \mathscr {S}\) has the property that \(w|_{[0, T]}\) can be extended to a map in \(\mathscr {U}\) for every \(T >0\), then \(w\in \mathscr {U}\).

The elements \(u\in \mathscr {U}\) are referred to as solutions.

The hypotheses (H1) and (H2) say that time translates of solutions are solutions and that the concatenation of two solutions yields a solution. It appears that both axioms arise quite naturally in generalizations of semiflow theory allowing for nonuniqueness phenomena (cf. [5] and Definition 1.1).

The meaning of hypothesis (H3) is that there is only one proper direction in which to run through the range of a solution. Typical examples (as given in this paper) are situations involving an energy which decreases along solution curves and is constant along a solution only if the solution is constant. As a consequence of (H3) (by choosing \(u=v\)) we also obtain

$$\begin{aligned} u(s) = u(t) \quad \text {if and only if} \quad u(r) = u(s) \text { for all } r\in [s, t] \end{aligned}$$
(3.1)

for all \(u\in \mathscr {U}\) and \(0\le s< t < +\infty\).

Remark 3.2

Hypothesis (H3) may be replaced by (3.1) in Definition 3.1.

Indeed, if the translation and concatenation hypotheses (H1) and (H2) hold good and all \(u\in \mathscr {U}\) satisfy (3.1), then (H3) follows by a contradiction argument: suppose that there exist \(u, v \in \mathscr {U}\) and \(r_1< r_2, \ l_2 < l_1\) such that \(v(l_1) = u(r_1) \ne u(r_2) = v(l_2)\), and construct the map \(w: [0, +\infty ) \rightarrow \mathscr {S}\),

$$\begin{aligned} w(t) := {\left\{ \begin{array}{ll} u(t) &{}\text { if } t\le r_2 \\ v(t + l_2 - r_2) &{}\text { if } t > r_2 \end{array}\right. } \end{aligned}$$

which belongs to \(\mathscr {U}\) by (H1) and (H2). Then \(w(r_1) = w(r_2 + l_1 - l_2)\), but \(w(r_2) \ne w(r_1)\) and \(r_1< r_2 < r_2 + l_1 - l_2\), in contradiction to (3.1).

Conversely, (H3) implies (3.1), as already mentioned.

The extension property expressed in hypothesis (H4) excludes degenerate behaviour of the rate at which the range of a solution is run through. We give an example of such a degenerate case, which should be excluded.

Example 3.3

Let \(\mathscr {S}= \mathbb {R}\) and \(\mathscr {U}\) be the family of all continuous maps \(u: [0, +\infty ) \rightarrow \mathbb {R}\) satisfying \(u(0) > 0\) and

$$\begin{aligned} u'(t) = u(t)^2 \text { if } t\in (S_i, T_i), \ i\in \mathbb {N}, \quad u'(t) = u(t) \text { if } t\notin \bigcup _{i\in \mathbb {N}}^{}{[S_i, T_i]} \end{aligned}$$

for some \(S_{i+1} \ge T_i \ge S_i \ge 0\) such that \(\{S_i, \ T_i \ | \ i\in \mathbb {N}\}\cap [0, T]\) is a finite set for every \(T > 0\). Then obviously \(\mathscr {U}\) is nonempty and the hypotheses (H1)–(H3) and (H5) hold good, but choosing \(w: [0, 1) \rightarrow \mathbb {R}, \ w(t) := \frac{1}{1-t}\), we see that \(\mathscr {U}\) does not satisfy (H4).
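Indeed, a brief verification of the last claim: with \(u(t) := e^t\) (choosing \(S_i = T_i = 0\) for all i) we have \(u\in \mathscr {U}\), and

$$\begin{aligned} w'(t) = \frac{1}{(1-t)^2} = w(t)^2 \ \text { for } t\in (0, 1), \qquad w([0, 1)) = [1, +\infty ) = \mathscr {R}[u], \end{aligned}$$

so every restriction \(w|_{[0, T]}, \ T<1,\) can be extended to a map in \(\mathscr {U}\) by switching to \(u'(t) = u(t)\) after time T; but \(\lim _{t\uparrow +\infty }u(t)\) does not exist in \(\mathbb {R}\), so the conclusion of (H4) fails.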

Hypothesis (H5) reflects the ‘local character’ of \(\mathscr {U}\). The following example provides a classic case in which a nonlocal characterization introduces some arbitrariness that we intend to exclude by hypothesis (H5).

Example 3.4

Let \(\mathscr {S}= \mathbb {R}^2\) and \(\mathscr {U}\) be the family of all continuous maps \(u: [0, +\infty ) \rightarrow \mathbb {R}^2, \ u(t) = (u_1(t), u_2(t))\) such that \(u_1(0) > 0\), \(u_2\) is strictly increasing and

$$\begin{aligned} u_1'(t) = u_1(t) \text { for all } t> 0, \quad \exists T \ge 0: \ u_2(t) = u_2(T) + t - T \text { for all } t > T. \end{aligned}$$

Then it is easy to check that \(\mathscr {U}\) is nonempty and satisfies (H1)–(H4) but \(\mathscr {U}\) does not satisfy (H5). In this case, any strictly increasing continuous map \(g: [0, +\infty ) \rightarrow \mathbb {R}\) which does not eventually become linear will yield a counterexample to (H5).
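For instance (a concrete choice added here for illustration), one may take

$$\begin{aligned} w(t) := \big (e^t, \ t + \log (1+t)\big ), \quad t\ge 0: \end{aligned}$$

every restriction \(w|_{[0, T]}\) can be extended to a map in \(\mathscr {U}\) by continuing the second component with unit slope after time T, but w itself does not belong to \(\mathscr {U}\), since \(\frac{d}{dt}\big (t + \log (1+t)\big ) = 1 + \frac{1}{1+t} > 1\) for all \(t\ge 0\), so the second component is never eventually of the form \(u_2(T) + t - T\).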

Let us explain to what extent our notion of generalized \(\Lambda\)-semiflow is an abstraction of the classical semiflow theory.

We observe that any semiflow \(\mathscr {U}\) whose members satisfy (3.1) is a generalized \(\Lambda\)-semiflow. This follows from the time translation and uniqueness property (corresponding to given initial data) of a semiflow ((G2) and (S)). It is straightforward to check (H1)–(H3) in this case. Choosing \(u\in \mathscr {U}\) and \(w: [0, \theta ) \rightarrow \mathscr {S}\) as in (H4), we obtain \(u|_{[0, \theta )} = w|_{[0, \theta )}\) by (S) (since \(u(0) = w(0)\) by (H3)) so that \(w([0, \theta )) = \mathscr {R}[u]\) and (3.1) yield u constant in \([\theta, +\infty )\). This proves (H4). Finally, (H5) follows from (S).

On the other hand, if a member \(u: [0, +\infty ) \rightarrow \mathscr {S}\) of a semiflow does not satisfy (3.1), then there is necessarily a time \(T > 0\) such that u is periodic and nonconstant on \([T, +\infty )\). Indeed, if there exist \(0\le s< \bar{r}< t < +\infty\) such that \(u(s) = u(t)\) but \(u(\bar{r})\ne u(s)\), then (G2) and (S) imply \(u(r + s) = u(r + t)\) for all \(r\ge 0\) which is equivalent to

$$\begin{aligned} u(r + t - s) = u(r) \text { for all } r\ge s, \quad u(\bar{r} + j(t-s))\ne u(s + j(t-s)) \text { for all } j\in \mathbb {N}. \end{aligned}$$

The hypotheses (H3) and (H4) do not hold good in this case. We illustrate this situation excluded in Definition 3.1 with an example.

Example 3.5

Let \(\mathscr {S}= \mathbb {R}^2\) and consider

$$\begin{aligned} \mathscr {U}:= \{u: [0, +\infty ) \rightarrow \mathbb {R}^2 \ | \ u(\cdot ) \equiv r(\cos (\cdot + \tau ), \ \sin (\cdot + \tau )), \quad \tau \in [0, 2\pi ), \ r\ge 0 \}. \end{aligned}$$

Clearly, \(\mathscr {U}\) is a semiflow on \(\mathbb {R}^2\), but the hypotheses (H3) and (H4) are not satisfied (for \(r>0\), we have \(u(0) = u(2\pi )\) although u is nonconstant on \([0, 2\pi ]\), and \(\lim _{t\uparrow +\infty }u(t)\) does not exist), so \(\mathscr {U}\) is not a generalized \(\Lambda\)-semiflow.

A connection between generalized \(\Lambda\)-semiflows and the established theory of generalized semiflows introduced by Ball [5] is made in Sect. 4. We will see that any generalized semiflow whose members satisfy (3.1) satisfies the hypotheses (H1)–(H3), (H5) and a slightly weaker variation on (H4). If, in addition, all the solutions are continuous, then it satisfies (H4), too.

We note that a generalized \(\Lambda\)-semiflow \(\mathscr {U}\) on \(\mathscr {S}\) is nonempty, but there may be initial data \(x\in \mathscr {S}\) for which there exists no \(u\in \mathscr {U}\) with \(u(0) = x\). Also, nothing is said about the behaviour of a sequence \((u_j)\) in \(\mathscr {U}\) with converging initial data \(u_j(0)\).

Gradient flows in metric spaces fit very well into the concept of generalized \(\Lambda\)-semiflows. This aspect is examined in Sect. 5.

3.2 A partial order between solutions

Let a generalized \(\Lambda\)-semiflow \(\mathscr {U}\) on \(\mathscr {S}\) be given. We introduce a particular class of solutions (which we call minimal solutions), arising naturally from a partial order in \(\mathscr {U}\):

Definition 3.6

If \(u,v\in \mathscr {U}\) we say that \(u\succ v\) if \(\mathscr {R}[v]\subset \overline{\mathscr {R}[u]}\) and there exists an increasing 1-Lipschitz map \({\mathsf {z}}:[0,+\infty )\rightarrow [0,+\infty )\) with \({\mathsf {z}}(0)=0\) such that

$$\begin{aligned} u(t)=v({\mathsf {z}}(t))\quad \text {for every }t\ge 0. \end{aligned}$$
(3.2)

An element \(u\in \mathscr {U}\) is minimal if for every \(v\in \mathscr {U}\), \(u\succ v\) yields \(u=v\); and \(\mathscr {U}_{\text {min}}\) denotes the collection of all the minimal solutions.

Let us make a few comments on Definition 3.6.

  1. (i)

    A map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) is increasing and 1-Lipschitz if and only if

    $$\begin{aligned} 0\le {\mathsf {z}}(t)-{\mathsf {z}}(s)\le t-s \quad \text {for every }0\le s\le t. \end{aligned}$$
    (3.3)
  2. (ii)

    It is not difficult to see that \(\succ\) forms indeed a partial order in \(\mathscr {U}\) ([13], Remark 3.3).

  3. (iii)

    Condition (3.2) implies the range inclusion \(\mathscr {R}[u]\subset \mathscr {R}[v]\).

  4. (iv)

    The condition on the range \(\mathscr {R}[v]\subset \overline{\mathscr {R}[u]}\) gives control over the long-time behaviour of a possible minimal solution. Its effect as a selection criterion is illustrated in [13, Remark 3.2] with a one-dimensional example of a gradient flow.
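To illustrate the role of the range condition in (iv), consider a concrete one-dimensional gradient flow (a hypothetical example in the spirit of [13, Remark 3.2], not reproduced from there): let \(\phi (x) := -\tfrac{9}{5}\, x\, |x|^{2/3}\) on \(\mathbb {R}\), so that \(-\phi '(x) = 3|x|^{2/3}\) vanishes only at the degenerate critical point \(x=0\), and consider the two solutions of \(u' = -\phi '(u)\) starting from \(-1\),

$$\begin{aligned} u(t) := -\big ((1-t)_+\big )^3, \qquad v(t) := \mathrm {sgn}(t-1)\, |t-1|^3, \qquad t\ge 0. \end{aligned}$$

Then \(u(t) = v(t\wedge 1)\), i.e. (3.2) holds with the admissible map \({\mathsf {z}}(t) = t\wedge 1\); nevertheless \(u\not \succ v\), since \(\mathscr {R}[v] = [-1, +\infty )\) is not contained in \(\overline{\mathscr {R}[u]} = [-1, 0]\). The range condition thus rules out the relation \(u\succ v\) between the solution which stops at the critical point and the solution which moves on; without it, u could never be minimal although it runs through its own range without any delay.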

It is not clear a priori whether minimal solutions exist at all. Some kind of compactness property of \(\mathscr {U}\) appears necessary in order to guarantee the existence of minimal solutions. Let us introduce our main tools concerning compactness for the existence proof given in Sect. 3.3.

We introduce the class of truncated solutions

$$\begin{aligned} \mathscr {T}[\mathscr {U}] := \{v: [0, +\infty ) \rightarrow \mathscr {S}\ | \ v(t) = u(t\wedge T) \quad \text {for some } u\in \mathscr {U}, \ T\in [0, +\infty ]\} \end{aligned}$$

and we define the map \(\rho : \mathscr {T}[\mathscr {U}] \rightarrow [0, +\infty ]\) as

$$\begin{aligned} \rho (v) := \inf \{s\ge 0 \ | \ v(t) = v(s) \quad \text {for every } t\ge s\}, \ v\in \mathscr {T}[\mathscr {U}]. \end{aligned}$$
(3.4)

The following compactness hypothesis will turn out to be appropriate for our purposes:

  1. (C)

    If a sequence \(v_n\in \mathscr {T}[\mathscr {U}], \ n\in \mathbb {N},\) satisfies \(\sup _n{\rho (v_n)} < +\infty\) and \(\mathscr {R}[v_n] = \mathscr {R}[v_1]\) for all \(n \in \mathbb {N}\), then there exists \(v\in \mathscr {T}[\mathscr {U}]\) and a subsequence \(n_k\uparrow +\infty\) such that

    $$\begin{aligned} v_{n_k}(t){\mathop {\rightarrow }\limits ^{\mathscr {S}}} v(t) \quad \text {for all } t\in [0, +\infty ), \quad \mathscr {R}[v] = \mathscr {R}[v_1]. \end{aligned}$$

We note that in the above situation it holds that

$$\begin{aligned} \rho (v) \ \le \ \liminf _{k\rightarrow +\infty } \rho (v_{n_k}) \end{aligned}$$
(3.5)

since \(\rho\) is lower semicontinuous with respect to pointwise convergence.
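Indeed, the lower semicontinuity is elementary: if \(s > \liminf _{k\rightarrow +\infty } \rho (v_{n_k})\), then \(v_{n_k}(t) = v_{n_k}(s)\) for all \(t\ge s\) and infinitely many k, and passing to the pointwise limit (limits are unique in the Hausdorff topology) gives

$$\begin{aligned} v(t) = v(s) \quad \text {for all } t\ge s, \qquad \text {hence } \rho (v) \le s. \end{aligned}$$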

Now, we have all the ingredients to prove the existence of minimal solutions. Our construction will be based on a step-by-step procedure of truncating a given trajectory and each time minimizing \(\rho\) with respect to the truncated range.

3.3 Existence and characteristics of minimal solutions

Existence and uniqueness of minimal solutions corresponding to given ranges are proved under the additional compactness hypothesis (C).

It is shown that, among solutions sharing the same range, the minimal solution induces all the other ones by time reparametrization (3.2) and reaches any point in the range in minimal time.

Definition of \(\mathscr {U}\,[\mathscr {R}]\)

For a generalized \(\Lambda\)-semiflow \(\mathscr {U}\) and the range \(\mathscr {R}= \mathscr {R}[y] \subset \mathscr {S}\) of a solution \(y\in \mathscr {U}\), we define \(\mathscr {U}\,[\mathscr {R}]\) as the collection of all the solutions \(w\in \mathscr {U}\) with \(\mathscr {R}\subset \mathscr {R}[w]\subset \overline{\mathscr {R}} := \mathscr {R}\cup \{w_\star \in \mathscr {S}\ | \ \exists t_n\rightarrow +\infty, \ y(t_n){\mathop {\rightarrow }\limits ^{\mathscr {S}}} w_\star \}\) and

$$\begin{aligned} w([0, \theta )) = \mathscr {R}\quad \text {and} \quad w([\theta, +\infty )) \subset \overline{\mathscr {R}}\setminus \mathscr {R}\quad \text {for some } \theta \in (0, +\infty ]. \end{aligned}$$
(3.6)

We note that the set \(\overline{\mathscr {R}}\) is indeed independent of the choice \(y\in \mathscr {U}\) with \(\mathscr {R}[y] = \mathscr {R}\):

Lemma 3.7

Whenever \(y, \tilde{y}\in \mathscr {U}, \ \mathscr {R}[y] = \mathscr {R}[\tilde{y}]\) and \(w_\star \in \mathscr {S}\), it holds that

$$\begin{aligned} \exists t_n\rightarrow +\infty, \ y(t_n){\mathop {\rightarrow }\limits ^{\mathscr {S}}} w_\star \quad \text {if and only if} \quad \exists s_n\rightarrow +\infty, \ \tilde{y}(s_n){\mathop {\rightarrow }\limits ^{\mathscr {S}}} w_\star, \end{aligned}$$
(3.7)

i.e. it holds that \(\overline{\mathscr {R}[y]} = {\overline{\mathscr {R}}} = \overline{\mathscr {R}[\tilde{y}]}\).

Proof

If \(t_n\rightarrow +\infty, \ y(t_n) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} w_\star\), then there is a sequence of times \((s_n)\) with \(\tilde{y}(s_n) = y(t_n)\), and by (H3), we may assume that \((s_n)\) is increasing. Let \(S:= \sup _n s_n\). If \(S = +\infty\) or \(T_\star (y) < +\infty\), nothing remains to be shown. If \(S < +\infty\) and \(T_\star (y)=+\infty\), then we obtain \(\tilde{y}([0, S)) = \mathscr {R}[y] = \mathscr {R}[\tilde{y}]\) by (H3), and thus by (3.1) there exists \(\delta > 0\) such that \(\tilde{y}\) is constant in \((S-\delta, +\infty )\), in contradiction to \(T_\star (y) = +\infty\). This proves (3.7). \(\square\)

Let us take a close look at the case of finite \(\theta\) in (3.6). If there exists a solution \(w\in \mathscr {U}\,[\mathscr {R}]\) with \(w([0, \theta )) = \mathscr {R}\) and \(\theta < +\infty\), we may apply (H4) and obtain that the limit \(\lim _{t\uparrow +\infty } y(t) =: w_\star \in \mathscr {S}\) is well-defined and that \(w(t) = w_\star\) for all \(t\ge \theta\). We notice that \({\overline{\mathscr {R}}}\) then takes the form \({\overline{\mathscr {R}}} = \mathscr {R}\cup \{w_\star \}\). In this case, \({\overline{\mathscr {R}}} = \mathscr {R}[w] = \overline{{\overline{\mathscr {R}}}}\) and \(\mathscr {U}\,[{\overline{\mathscr {R}}}] \subset \mathscr {U}\,[\mathscr {R}]\).

The following observation, which is a direct consequence of Definition 3.6 and (3.1), may be seen as a motivation for considering \(\mathscr {U}\,[\mathscr {R}]\).

Lemma 3.8

For \(y, w\in \mathscr {U}\), the implication

$$\begin{aligned} y\succ w \quad \Rightarrow \quad w\in \mathscr {U}\,[\mathscr {R}[y]] \end{aligned}$$
(3.8)

holds good.

Proof

If \(y\succ w\), then by definition, \(\mathscr {R}[w]\subset \overline{\mathscr {R}[y]}\) and there exists an increasing 1-Lipschitz map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) with \({\mathsf {z}}(0) = 0\) such that \(y(t) = w({\mathsf {z}}(t))\) for all \(t\ge 0\). Choose \(\theta := \sup _{t\ge 0} {\mathsf {z}}(t) \in [0, +\infty ]\). If \(s\in [0, \theta )\), then there exists \(t\ge 0\) such that \({\mathsf {z}}(t) = s\) and thus \(w(s) \in \mathscr {R}[y]\). If \(\theta = +\infty\), then \(\mathscr {R}[w] = \mathscr {R}[y]\). The same holds if \(\theta < +\infty, \ {\mathsf {z}}(\bar{t}) = \theta\) for some \(\bar{t}\ge 0\). Finally, we consider the case \(\theta< +\infty, \ {\mathsf {z}}(t) < \theta\) for all \(t\ge 0\). It holds that \(w([0, \theta )) = \mathscr {R}[y]\) and \(w([\theta, +\infty )) \subset \overline{\mathscr {R}[y]}\). If \(w(s) \in \mathscr {R}[y]\) for some \(s\ge \theta\), then there exists \(\tilde{s}\in [0, \theta )\) such that \(w(s) = w(\tilde{s})\), and by (3.1), w is constant in \([\tilde{s}, s]\), hence \(T_\star (y) < +\infty\) and \(\mathscr {R}[w] = \mathscr {R}[y]\). The proof of (3.8) is complete. \(\square\)

Now, our theorem reads as follows.

Theorem 3.9

Let \(\mathscr {U}\) be a generalized \(\Lambda\)-semiflow on \(\mathscr {S}\) satisfying the compactness hypothesis (C). Suppose that every solution \(u\in \mathscr {U}\) is sequentially continuous, i.e.

$$\begin{aligned} u(t_j) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} u(t) \quad \text {whenever } t_j \rightarrow t, \ t_j, t \in [0, +\infty ). \end{aligned}$$
(3.9)

Then the following statements hold good:

  1. (1)

    For every \(\mathscr {R}= \mathscr {R}[y] \subset \mathscr {S}\) which is the range of a solution \(y\in \mathscr {U}\) there exists a unique minimal solution \(u\in \mathscr {U}\,[\mathscr {R}]\cap \mathscr {U}_{\text {min}}\).

    Moreover, if \(v\in \mathscr {U}\,[\mathscr {R}]\), then \(v\succ u\).

  2. (2)

    Every minimal solution \(u\in \mathscr {U}_{\text {min}}\) is injective in \([0, T_\star (u))\).

  3. (3)

    Whenever \(u\in \mathscr {U}_{\text {min}}, \ v\in \mathscr {U}\) with \(u\in \mathscr {U}\,[\mathscr {R}[v]]\) and \(u(t_0) = v(t_1)\) for some \(t_0, t_1\in [0, +\infty )\), then \(t_0\wedge T_\star (u) \le t_1\).

  4. (4)

    Whenever \(u\in \mathscr {U}_{\text {min}}, \ v\in \mathscr {U}\) with \(v([s_1, t_1]) = u([s_0, t_0])\) for some \(t_i\ge s_i\ge 0 \ (i=0,1)\), then the inequality

    $$\begin{aligned} t_0\wedge T_\star (u) - s_0 \ \le \ t_1 - s_1 \end{aligned}$$

    necessarily holds.

  5. (5)

    A solution \(u\in \mathscr {U}\) belongs to \(\mathscr {U}_{\text {min}}\) if for every \(v\in \mathscr {U}\,[\mathscr {R}[u]]\) the following implication holds: whenever \(u(t_0) = v(t_1)\) for some \(t_0, t_1 \in [0, +\infty )\), then \(t_0\wedge T_\star (u) \le t_1\).

Proof

  1. (1).

    Let \(\mathscr {R}= \mathscr {R}[y] \subset \mathscr {S}\) be the range of a solution \(y\in \mathscr {U}\). We distinguish between two cases: \(T_\star (y) = +\infty\) and \(T_\star (y) < +\infty\). In the first case, we select an increasing sequence of times \(T_n \uparrow +\infty\) with \(y(T_n) \ne y(T_{n+1})\) for all \(n\in \mathbb {N}\). Then we have

    $$\begin{aligned} y([0, T_n]) \subsetneq y([0, T_{n+1}]), \quad \bigcup _{n}^{}{y([0, T_n])} = \mathscr {R}. \end{aligned}$$

    If \(T_\star (y) < +\infty\), we may go through the following proof with just one step \(n=1\) and \(T_1 := T_\star (y)\). For every \(n \ (n\in \mathbb {N} \text { or } n=1)\), we minimize \(\rho\) (defined in (3.4)) in

    $$\begin{aligned} \mathcal {G}[\mathscr {R}_n] := \{w\in \mathscr {T}[\mathscr {U}] \ | \ \mathscr {R}[w] = \mathscr {R}_n\}, \quad \mathscr {R}_n := y([0, T_n]). \end{aligned}$$

    Since \(y(\cdot \wedge T_n) \in \mathcal {G}[\mathscr {R}_n]\) and thus \(\inf _{w\in \mathcal {G}[\mathscr {R}_n]} \rho (w) \le T_n < +\infty\), the compactness hypothesis (C) and (3.5) yield the existence of a minimizer \(u_n\in \mathcal {G}[\mathscr {R}_n]\) of \(\rho |_{\mathcal {G}[\mathscr {R}_n]}\). By (3.9), \(u_n\) is constant in \([\rho (u_n), +\infty )\). We show that \(u_n\) is the unique minimizer of \(\rho\) in \(\mathcal {G}[\mathscr {R}_n]\). Suppose that there exist \(\tilde{u}_n\in \mathcal {G}[\mathscr {R}_n], \ t_0 \ge 0\) with \(\rho (\tilde{u}_n) = \rho (u_n), \ \tilde{u}_n(t_0)\ne u_n(t_0)\). Then it follows from (H3) that \(t_0\in (0, \rho (u_n))\) and that there exists \(s_0\in [0, \rho (u_n))\), w.l.o.g. \(s_0 < t_0\), such that \(\tilde{u}_n(s_0) = u_n(t_0)\) and \(\tilde{u}_n([0, s_0]) = u_n([0, t_0])\). By (H1) and (H2), we may construct a truncated solution \(w\in \mathcal {G}[\mathscr {R}_n]\),

    $$\begin{aligned} w(r):= {\left\{ \begin{array}{ll} \tilde{u}_n(r) &{}\text { if } r\in [0, s_0] \\ u_n(r+t_0-s_0) &{}\text { if } r > s_0 \end{array}\right. } \end{aligned}$$

    satisfying \(\rho (w) \le \rho (u_n) + s_0 - t_0 < \rho (u_n)\), in contradiction to \(u_n\) minimizing \(\rho\) in \(\mathcal {G}[\mathscr {R}_n]\). So \(\rho\) admits a unique minimizer \(u_n\) in \(\mathcal {G}[\mathscr {R}_n]\). The same argument shows that \(u_n\) is injective in \([0, \rho (u_n)]\). We now set \(S_n := \rho (u_n) \le T_n\) and define \({\mathsf {z}}_n: [0, T_n] \rightarrow [0, S_n]\) as

    $$\begin{aligned} {\mathsf {z}}_n(t):=\min \Big \{s\in [0,S_n]:u_n(s)=y(t)\Big \}, \quad t\in [0, T_n]. \end{aligned}$$

    The map \({\mathsf {z}}_n\) is increasing by (H3), and \({\mathsf {z}}_n(0) = 0, \ {\mathsf {z}}_n(T_n) = S_n\). It holds that \(u_n({\mathsf {z}}_n(t)) = y(t)\) for all \(t\in [0, T_n]\). A contradiction argument shows that \({\mathsf {z}}_n\) is 1-Lipschitz. Suppose that there exist \(t_1, t_2 \in [0, T_n], \ t_1 < t_2\), such that \(\delta _t := t_2 - t_1 < {\mathsf {z}}_n(t_2) - {\mathsf {z}}_n(t_1) =: \delta _{\mathsf {z}}\). Then, let us construct the map \(w: [0, +\infty ) \rightarrow \mathscr {S}\),

    $$\begin{aligned} w(r):= {\left\{ \begin{array}{ll} u_n(r)&{}\text {if }0\le r\le {\mathsf {z}}_n(t_1),\\ y(r+t_1-{\mathsf {z}}_n(t_1))&{}\text {if }{\mathsf {z}}_n(t_1)\le r\le \delta _t+{\mathsf {z}}_n(t_1)\\ u_n(r+\delta _{\mathsf {z}}-\delta _t)&{}\text {if }r\ge \delta _t+{\mathsf {z}}_n(t_1). \end{array}\right. } \end{aligned}$$

    which belongs to \(\mathcal {G}[\mathscr {R}_n]\) by (H1)–(H3). Moreover, \(\rho (w)\le S_n - \delta _{\mathsf {z}}+ \delta _t < S_n\), a contradiction to the fact that \(u_n\) minimizes \(\rho\) in \(\mathcal {G}[\mathscr {R}_n]\). A further contradiction argument (which we omit since it is very similar to the preceding two) shows that \(S_n < S_{n+1}\) and that \(u_n(\cdot \wedge s)\) minimizes \(\rho\) in

    $$\begin{aligned} \{w\in \mathscr {T}\,[\mathscr {U}] \ | \ \mathscr {R}[w] = u_n([0, s])\} \end{aligned}$$

    if \(s\in [0, S_n]\). In particular, we obtain

    $$\begin{aligned} u_n(s) = u_{n+1}(s), \quad {\mathsf {z}}_n(t) = {\mathsf {z}}_{n+1}(t) \quad \text {for every } s\in [0, S_n], \ t\in [0, T_n]. \end{aligned}$$
    (3.10)

    Let \(S_\star := \sup _n S_n\). Due to (3.10), we may define \(u: [0, S_\star ) \rightarrow \mathscr {S}\) as

    $$\begin{aligned} u(s) := u_n(s) \quad \text {if } s\in [0, S_n], \end{aligned}$$
    (3.11)

    and \({\mathsf {z}}: [0, T_\star (y)) \rightarrow [0, S_\star )\) as

    $$\begin{aligned} {\mathsf {z}}(t) := {\mathsf {z}}_n(t) \quad \text {if } t\in [0, T_n]. \end{aligned}$$
    (3.12)

    If \(S_\star = +\infty\), then the map \(u: [0, +\infty ) \rightarrow \mathscr {S}\) belongs to \(\mathscr {U}\) by (H5). Since it holds that \(T_\star (y) = +\infty\) in this case, we obtain \(y(t) = u({\mathsf {z}}(t))\) for all \(t\ge 0\). In particular, \(y\succ u\). If \(S_\star < + \infty\) and \(T_\star (y) = +\infty\), we apply hypothesis (H4) which provides that the limit \(\lim _{t\uparrow +\infty } y(t) =: u_\star \in \mathscr {S}\) is well-defined in this case and that extending u by the constant value \(u_\star\) yields a map in \(\mathscr {U}\), i.e. \(u: [0, +\infty ) \rightarrow \mathscr {S}\) defined as

    $$\begin{aligned} u(s) := {\left\{ \begin{array}{ll} u_n(s) &{}\text { if } s\in [0, S_n] \\ u_\star &{}\text { if } s\ge S_\star \end{array}\right. } \end{aligned}$$
    (3.13)

    belongs to \(\mathscr {U}\). Again we obtain \(y(t) = u({\mathsf {z}}(t))\) for all \(t\ge 0\), and thus \(y\succ u\). The same goes for the case \(S_\star = S_1, \ T_\star (y) < +\infty\): in this case we may extend u as in (3.13) due to (H1) and (H2), and extending \({\mathsf {z}}\) by the constant value \(S_\star\), we obtain \(y\succ u\). We note that \(u\in \mathscr {U}\,[\mathscr {R}]\) and \(\mathscr {U}\,[\mathscr {R}[u]] \subset \mathscr {U}\,[\mathscr {R}]\). Suppose now that

    $$\begin{aligned} v\succ u \quad \text {for all } v\in \mathscr {U}\,[\mathscr {R}]. \end{aligned}$$
    (3.14)

    Then, due to (3.8), it follows that for every \(\bar{u}\in \mathscr {U}\), \(u \succ \bar{u}\) yields \(u = \bar{u}\). This shows that \(u\in \mathscr {U}_{\text {min}}\), and by (3.14) again, u is the unique minimal solution in \(\mathscr {U}\,[\mathscr {R}]\). So it only remains to prove (3.14): Let \(v\in \mathscr {U}\,[\mathscr {R}]\). Let \(S_n\) be as in the construction of u. For every \(S_n\), choose \(0\le \tilde{T}_n \le T_\star (v)\) such that \(v(\tilde{T}_n) = u(S_n)\). By (H3), \(v([0, \tilde{T}_n]) = u([0, S_n])\) and \((\tilde{T}_n)\) is increasing. We set \(\tilde{T}_\star := \sup _n \tilde{T}_n\le T_\star (v)\). We show that \(\tilde{T}_\star = T_\star (v)\). Suppose that \(\tilde{T}_\star < T_\star (v)\) (which implies \(S_\star \le \tilde{T}_\star < +\infty\)). Then we obtain by (H3), since \(v([0, \tilde{T}_\star )) = u([0, S_\star ))\), that there exists \(\delta > 0\) such that v is constant in \((\tilde{T}_\star - \delta, \tilde{T}_\star )\), contradicting the fact that \(v(T_n) \ne v(T_m)\) for \(n\ne m\). If there is only one step \(n=1\) in the construction of u and \(S_\star = S_1\), then we clearly have \(T_\star (v) < +\infty\) and \(\tilde{T}_\star = \tilde{T}_1 = T_\star (v)\). We define \({\tilde{{\mathsf {z}}}}: [0, T_\star (v)) \rightarrow [0, S_\star )\) as

    $$\begin{aligned} {\tilde{{\mathsf {z}}}}(t) := \min \Big \{s\in [0,S_\star ):u(s)=v(t)\Big \}, \quad t\in [0, T_\star (v)). \end{aligned}$$

    It holds that \(v(t) = u({\tilde{{\mathsf {z}}}}(t))\) for all \(t\in [0, T_\star (v))\). Following the same arguments as above for \({\mathsf {z}}\), we obtain that \(\tilde{{\mathsf {z}}}\) is increasing and 1-Lipschitz. Extending \(\tilde{{\mathsf {z}}}\) by the constant value \(S_\star \le T_\star (v)\) if \(T_\star (v) < +\infty\), we obtain \(v\succ u\). The proof of (1) is complete. The statements (2)–(5) are direct consequences of our method of constructing the minimal solutions. However, we provide independent proofs.

  2. (2).

    Let \(u\in \mathscr {U}_{\text {min}}\) and suppose that there exist \(0\le t_0< t_1 < T_\star (u)\) such that \(u(t_0) = u(t_1)\). By (3.1) it follows that \(u(r) = u(t_0)\) for all \(r\in [t_0, t_1]\). Now we define \(w: [0, +\infty ) \rightarrow \mathscr {S}\) as

    $$\begin{aligned} w(t) := {\left\{ \begin{array}{ll} u(t) &{}\text { if } 0\le t \le t_0 \\ u(t+t_1-t_0) &{}\text { if } t > t_0 \end{array}\right. } \end{aligned}$$

    which belongs to \(\mathscr {U}\) by (H1) and (H2). Choosing \({\mathsf {z}}(t):= t \wedge t_0 + (t-t_1)_+\), we see that \(u\succ w\), which yields \(w = u\) since u is minimal. This implies \(u(r) = u(r + t_1 - t_0)\) for all \(r \ge t_0\). Due to (3.1), it follows that u is constant in \([t_0, +\infty )\), in contradiction to \(t_0 < T_\star (u)\). So u is injective in \([0, T_\star (u))\).

  3. (3)

    is a special case of (4).

  4. (4).

    Let \(u\in \mathscr {U}_{\text {min}}, \ v\in \mathscr {U}\) and \(t_i\ge s_i\ge 0\) such that \(v([s_1, t_1]) = u([s_0, t_0])\). If \(T_\star (u) < +\infty\), we may assume w.l.o.g. that \(s_0 < t_0 \le T_\star (u)\). We note that \(v(s_1) = u(s_0)\) and \(v(t_1) = u(t_0)\) by (H3), and define \(w: [0, +\infty ) \rightarrow \mathscr {S}\) as

    $$\begin{aligned} w(r) := {\left\{ \begin{array}{ll} u(r) &{}\text { if } 0\le r \le s_0 \\ v(r+s_1-s_0) &{}\text { if } s_0 < r \le t_1-s_1+s_0 \\ u(r+t_0-s_0+s_1-t_1) &{}\text { if } r > t_1-s_1+s_0 \end{array}\right. } \end{aligned}$$

    which belongs to \(\mathscr {U}\) by (H1) and (H2), with \(\mathscr {R}[w] = \mathscr {R}[u]\). Due to (1), it holds that \(w\succ u\), i.e. there exists an increasing 1-Lipschitz continuous map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) such that \(u({\mathsf {z}}(t)) = w(t)\) for all \(t\in [0, +\infty )\). Since u is injective in \([0, T_\star (u))\) (see statement (2)), it follows that \({\mathsf {z}}(s_0) = s_0\) and \({\mathsf {z}}(t_1 - s_1 + s_0) \ge t_0\). So we obtain

    $$\begin{aligned} t_0 - s_0 \ \le \ {\mathsf {z}}(t_1 - s_1 + s_0) - {\mathsf {z}}(s_0) \ \le \ t_1 - s_1 \end{aligned}$$

    by the 1-Lipschitz continuity of \({\mathsf {z}}\). This proves (4).

  5. (5).

    Suppose that \(u\in \mathscr {U}\) satisfies the assumption of claim (5) and that \(u\succ v\) for some \(v\in \mathscr {U}\). Then there exists an increasing 1-Lipschitz map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) such that \(v({\mathsf {z}}(t)) = u(t)\) for all \(t\in [0, +\infty )\) and \({\mathsf {z}}(0) = 0\), hence \({\mathsf {z}}(t) \le t\) for all \(t\in [0, +\infty )\). Moreover, \(v\in \mathscr {U}\,[\mathscr {R}[u]]\) due to (3.8). By assumption of (5), it follows that \(t\le {\mathsf {z}}(t)\) for all \(t\in [0, T_\star (u))\). Taken together, this yields \({\mathsf {z}}(t) = t\) for all \(t\in [0, T_\star (u))\), and thus, \(u=v\). So we obtain that u is minimal.

The proof of Theorem 3.9 is complete. \(\square\)

Remark 3.10

In view of Definition 3.6 and (C), the sequential continuity (3.9) of the solutions appears to be a natural hypothesis in our concept (cf. the instances under consideration in Sects. 4 and 5).

We do not make use of the compactness hypothesis (C) or of (3.9) in the proofs of statements (2) and (5).

Time translates of minimal solutions are minimal solutions and the concatenation of two minimal solutions yields a minimal solution:

Proposition 3.11

Let \(\mathscr {U}\) be a generalized \(\Lambda\)-semiflow on \(\mathscr {S}\). Then it holds:

For every \(u\in \mathscr {U}_{\text {min}}\) and \(\tau \ge 0\), the map \(u^{\tau }(t) := u(t + \tau ), \ t\in [0, +\infty )\), belongs to \(\mathscr {U}_{\text {min}}\).

Whenever \(u, v \in \mathscr {U}_{\text {min}}\) with \(v(0) = u(\bar{t})\) for some \(\bar{t} \ge 0\) and u is sequentially continuous (3.9), then the map \(w: [0, +\infty ) \rightarrow \mathscr {S}\), defined by \(w(t):=u(t)\) if \(t\le \bar{t}_\star\) and \(w(t):=v(t-\bar{t}_\star )\) if \(t > \bar{t}_\star\), with \(\bar{t}_\star := \bar{t}\wedge T_\star (u)\), belongs to \(\mathscr {U}_{\text {min}}\).

Proof

We prove the first statement: Let \(u\in \mathscr {U}_{\text {min}}\) and \(\tau \ge 0\). Suppose that \(u^\tau \succ v\) for some \(v\in \mathscr {U}\). Then \(v\in \mathscr {U}\,[\mathscr {R}[u^\tau ]]\) and there exists an increasing 1-Lipschitz map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) such that \(u(t+\tau ) = u^\tau (t) = v({\mathsf {z}}(t))\) for all \(t\ge 0\). We define \(\tilde{v}: [0, +\infty ) \rightarrow \mathscr {S}\) as

$$\begin{aligned} \tilde{v}(t) := {\left\{ \begin{array}{ll} u(t) &{}\text { if } t\le \tau \\ v(t-\tau ) &{}\text { if } t>\tau \end{array}\right. } \end{aligned}$$

which belongs to \(\mathscr {U}\) by (H2). It holds that \(\tilde{v}\in \mathscr {U}\,[\mathscr {R}[u]]\) and choosing \(\tilde{{\mathsf {z}}}: [0, +\infty ) \rightarrow [0, +\infty )\),

$$\begin{aligned} \tilde{{\mathsf {z}}}(t) := {\left\{ \begin{array}{ll} t &{}\text { if } t\le \tau \\ {\mathsf {z}}(t-\tau ) + \tau &{}\text { if } t>\tau \end{array}\right. } \end{aligned}$$

we obtain \(u\succ \tilde{v}\). Since u is minimal, it follows that \(u = \tilde{v}\), hence \(u^{\tau } = v\) and the claim is proved.

Now, we prove the second statement: Let \(u, v\in \mathscr {U}_{\text {min}}, \ \bar{t}\ge 0\) be given, set \(\bar{t}_\star :=\bar{t}\wedge T_\star (u)\) and define \(w:[0, +\infty ) \rightarrow \mathscr {S}\) as

$$\begin{aligned} w(t) := {\left\{ \begin{array}{ll} u(t) &{}\text { if } t\le \bar{t}_\star \\ v(t-\bar{t}_\star ) &{}\text { if } t>\bar{t}_\star \end{array}\right. } \end{aligned}$$

which belongs to \(\mathscr {U}\) by (H2).

Suppose that \(w\succ y\) for some \(y\in \mathscr {U}\). Then \(y\in \mathscr {U}\,[\mathscr {R}[w]]\) and there exists an increasing 1-Lipschitz map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) such that \(w(t) = y({\mathsf {z}}(t))\) for all \(t\ge 0\). We define \(w_i: [0, +\infty ) \rightarrow \mathscr {S}\ (i=1,2)\) as

$$\begin{aligned} w_1(t) := {\left\{ \begin{array}{ll} y(t) &{}\text { if } t\le {\mathsf {z}}(\bar{t}_\star ) \\ u(t + \bar{t}_\star - {\mathsf {z}}(\bar{t}_\star )) &{}\text { if } t > {\mathsf {z}}(\bar{t}_\star ) \end{array}\right. } \quad \quad w_2(t) := y(t + {\mathsf {z}}(\bar{t}_\star )). \end{aligned}$$

Choosing \({\mathsf {z}}_i: [0, +\infty ) \rightarrow [0, +\infty ) \ (i=1,2)\),

$$\begin{aligned} {\mathsf {z}}_1(t) := {\left\{ \begin{array}{ll} {\mathsf {z}}(t) &{}\text { if } t\le \bar{t}_\star \\ t+{\mathsf {z}}(\bar{t}_\star ) - \bar{t}_\star &{}\text { if } t > \bar{t}_\star \end{array}\right. } \quad \quad {\mathsf {z}}_2(t):= {\mathsf {z}}(t + \bar{t}_\star ) - {\mathsf {z}}(\bar{t}_\star ), \end{aligned}$$

we see that \(u\succ w_1\) and \(v\succ w_2\). As uv are minimal solutions, it follows that \(u = w_1, \ v = w_2\). Hence, \(y(t) = u(t)\) for all \(t \le {\mathsf {z}}(\bar{t}_\star )\) and \(y(t) = v(t-{\mathsf {z}}(\bar{t}_\star ))\) for all \(t > {\mathsf {z}}(\bar{t}_\star )\). Due to statement (2) in Theorem 3.9, the minimal solution u is injective in \([0, T_\star (u))\). So, \(u=w_1\) implies \({\mathsf {z}}(\bar{t}_\star )= \bar{t}_\star\) and we obtain \(y = w\). The proof is complete. \(\square\)

Remark 3.12

Clearly, \(\mathscr {U}_{\text {min}}\) satisfies (H3), and with similar arguments as in the proof of Proposition 3.11, it is possible to show that \(\mathscr {U}_{\text {min}}\) satisfies (H4) and (H5), too.

The second statement of Proposition 3.11 still holds for \(0\le \bar{t} \le T_\star (u)\) if we do not assume that u is sequentially continuous.

4 Minimal solutions to generalized semiflows

We study the theory developed in Sect. 3 with regard to the concept of generalized semiflows introduced by Ball [5].

According to [5, 6], we suppose that \(\mathscr {S}\) is a metric space with metric d and we work with the topology induced by the metric, i.e.

$$\begin{aligned} x_j {\mathop {\rightarrow }\limits ^{\mathscr {S}}} x \quad :\Leftrightarrow \quad d(x_j, x) \rightarrow 0 \end{aligned}$$

for \(x_j, x\in \mathscr {S}\).

We refer the reader to Definition 1.1 for the definition of generalized semiflow. For a given generalized semiflow \(\mathscr {U}\), the following is defined in [5]:

A complete orbit is a map \(w: \mathbb {R}\rightarrow \mathscr {S}\) such that for any \(s\in \mathbb {R}\), the map \(w^s(t):= w(t+s), \ t\in [0, +\infty ),\) belongs to \(\mathscr {U}\). A complete orbit w is stationary if \(w(t) = x\) for all \(t\in \mathbb {R}\), for some \(x\in \mathscr {S}\).

Definition 4.1

[5] A function \(\psi : \mathscr {S}\rightarrow \mathbb {R}\) is called a Lyapunov function for \(\mathscr {U}\) if the following conditions hold:

  1. (L1)

    \(\psi\) is continuous,

  2. (L2)

    \(\psi (u(t)) \le \psi (u(s))\) for every \(u\in \mathscr {U}\) and \(0\le s\le t < +\infty\),

  3. (L3)

    whenever the map \(t\mapsto \psi (w(t)) \ (t\in \mathbb {R})\) is constant for some complete orbit w, then w is stationary.

Generalized semiflows with Lyapunov function and continuous solutions are discussed in [5, 6].
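As a standard illustration of Definition 4.1 (an example added here for orientation, not taken from [5, 6]): if the family of classical solutions of the gradient flow equation \(u'(t) = -\nabla \phi (u(t))\), for a continuously differentiable function \(\phi\) on \(\mathbb {R}^n\), forms a generalized semiflow, then \(\phi\) itself is a Lyapunov function for it, since

$$\begin{aligned} \frac{d}{dt}\, \phi (u(t)) \ = \ \langle \nabla \phi (u(t)), u'(t)\rangle \ = \ -|\nabla \phi (u(t))|^2 \ \le \ 0, \end{aligned}$$

which gives (L2), while a complete orbit w along which \(\phi \circ w\) is constant satisfies \(\nabla \phi (w(t)) = 0\), hence \(w' = 0\) and w is stationary, which gives (L3); (L1) is immediate.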

Minimal solutions to generalized semiflows. We find that any generalized semiflow with a Lyapunov function and continuous solutions is a generalized \(\Lambda\)-semiflow, i.e. satisfies the hypotheses (H1)–(H5) in Definition 3.1. Moreover, the compactness hypothesis (C) is satisfied.

We will see that the same holds good for any generalized semiflow with continuous solutions satisfying (3.1).

Also we will see that the presence of a function decreasing along solution curves allows of a further characterization of minimal solutions.

Theorem 4.2

Let \(\mathscr {U}\) be a generalized semiflow on \(\mathscr {S}\). Suppose that there exists a function \(\Psi : \mathscr {S}\rightarrow \mathbb {R}\) for \(\mathscr {U}\) satisfying (L2) and (L3) and that every solution \(u\in \mathscr {U}\) is sequentially continuous, i.e.

$$\begin{aligned} u(t_j) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} u(t) \quad \text {whenever } t_j \rightarrow t, \ t_j, t \in [0, +\infty ). \end{aligned}$$

Then \(\mathscr {U}\) is a generalized \(\Lambda\)-semiflow, according to Definition 3.1, and satisfies the compactness hypothesis (C). In particular, all the statements (1)–(5) of Theorem 3.9 hold good for \(\mathscr {U}\).

Comment on the function \(\Psi : \mathscr {S}\rightarrow \mathbb {R}\)

We suppose that there exists a function \(\Psi : \mathscr {S}\rightarrow \mathbb {R}\) for \(\mathscr {U}\) satisfying (L2) and (L3). If, in addition, \(\Psi\) is continuous, then it is called a Lyapunov function for \(\mathscr {U}\) (according to [5, 6], Definition 4.1 above).

Please note that we do not need to require continuity of \(\Psi\) in order to obtain the results of Theorem 4.2.

Proof

The existence hypothesis (G1) implies that \(\mathscr {U}\) is nonempty.

The hypotheses (H1) and (H2) correspond to (G2) and (G3). In order to prove (H3), it is now sufficient to show (3.1), due to Remark 3.2. Let \(u\in \mathscr {U}\) and \(0\le s< t < +\infty\) such that \(u(s) = u(t)\). Then it follows that \(\Psi (u(r)) = \Psi (u(s))\) for all \(r\in [s, t]\) since \(\Psi \circ u\) is decreasing. Applying (G2), (G3) and (G4), we obtain that the map \(v: \mathbb {R} \rightarrow \mathscr {S}\) defined as

$$\begin{aligned} v(r):= u(r + s - j(t-s)) \quad \text {if } r\in [j(t-s), (j+1)(t-s)], \ j\in \mathbb {Z}, \end{aligned}$$

is a complete orbit for \(\mathscr {U}\). It holds that \(\Psi (v(r)) = \Psi (u(s))\) for all \(r\in \mathbb {R}\) and we may conclude that v is stationary, i.e. \(u(r) = u(s)\) for all \(r\in [s, t]\). This proves (3.1).

Now, let us show that \(\mathscr {U}\) satisfies (H4). Let \(u\in \mathscr {U}\). Suppose that there exists a map \(w: [0, \theta ) \rightarrow \mathscr {S}\) with \(\theta < + \infty\) and \(w([0, \theta )) = \mathscr {R}[u]\) such that \(w|_{[0, T]}\) can be extended to a map in \(\mathscr {U}\) for every \(T\in [0, \theta )\). In particular, whenever \(T\in [0, \theta ), \ S\in [0, +\infty ), \ w(T) = u(S)\), the map \(w(\cdot, T, S) : [0, +\infty ) \rightarrow \mathscr {S}\) defined as

$$\begin{aligned} w(t, T, S) := {\left\{ \begin{array}{ll} w(t) &{}\text { if } 0\le t \le T \\ u(t + S - T) &{}\text { if } t > T \end{array}\right. } \end{aligned}$$

belongs to \(\mathscr {U}\). If \(T_\star (u) < +\infty\), the claim easily follows from hypothesis (H3) already proved above. If \(T_\star (u) = +\infty\), we select an increasing sequence of times \(S_n \uparrow +\infty\). Due to (H3), we find a corresponding increasing sequence \((T_n)\) with \(w(T_n) = u(S_n)\); moreover \(T_n\uparrow \theta\): indeed, if \(\sup _n T_n \le T < \theta\) for some \(T\in (0, \theta )\), then w would be constant in a small interval around \(\sup _n T_n\) since

$$\begin{aligned} \bigcup _n w([0, T_n]) \ = \ \bigcup _n u([0, S_n]) \ = \ \mathscr {R}[u] \ = \ w([0, \theta )), \end{aligned}$$

in contradiction to \(T_\star (u) = +\infty\).

Applying (G4) to \(w_n(\cdot ) := w(\cdot, T_n, S_n)\), we obtain that there exists a subsequence \(n_k \uparrow +\infty\) and \(\bar{w}\in \mathscr {U}\) such that \(w_{n_k}(t) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} \bar{w}(t)\) for all \(t\ge 0\). It holds that \(\bar{w}(t) = w(t)\) for all \(t\in [0, \theta )\). As a member of \(\mathscr {U}\), the map \(\bar{w}\) is sequentially continuous in \((0, +\infty )\). Hence, the limit \(\lim _{t\uparrow \theta } w(t)\) exists and coincides with \(\bar{w}(\theta ) =: w_\star \in \mathscr {S}\). In particular,

$$\begin{aligned} u(S_n) \ = \ w(T_n) \ {\mathop {\rightarrow }\limits ^{\mathscr {S}}} \ w_\star \quad (n\rightarrow +\infty ). \end{aligned}$$

Since the sequence \(S_n\uparrow +\infty\) has been chosen arbitrarily, it follows that

$$\begin{aligned} u(t_n) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} w_\star \quad \text {whenever } t_n\rightarrow +\infty, \quad \bar{w}(t) = w_\star \quad \text {for all } t\ge \theta, \end{aligned}$$

which gives (H4).

The hypothesis (H5) directly follows from a simple application of (G4).

Finally, we prove (C). Let a sequence \(v_n\in \mathscr {T}[\mathscr {U}], \ n\in \mathbb {N},\) be given, satisfying \(\sup _n \rho (v_n) < +\infty\) and \(\mathscr {R}[v_n] = \mathscr {R}[v_1]\) for all \(n\in \mathbb {N}\). We may assume w.l.o.g. that \(T_n := \rho (v_n) \rightarrow T\) for some \(T\in [0, +\infty )\). We select \(\bar{v}_n\in \mathscr {U}\) such that \(\bar{v}_n(t) = v_n(t)\) for all \(t\in [0, T_n]\). We note that \(v_n(0) = v_1(0)\) and \(v_n(T_n) = v_1(T_1)\) by (H3). Due to (G4), there exists a subsequence \(n_k\uparrow +\infty\) and a solution \(\bar{v} \in \mathscr {U}\) such that \(\bar{v}_{n_k}(t) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} \bar{v}(t)\) for all \(t\in [0, +\infty )\). Since all the solutions are continuous in \((0, +\infty )\), this convergence is uniform in compact subsets of \((0, +\infty )\) by [5, Thm. 2.2]. Moreover, it holds that

$$\begin{aligned} \text {whenever} \quad \bar{v}_{n_k}(s_k)\in \mathscr {R}[v_1], \ s_k\rightarrow 0, \quad \text {then} \quad \bar{v}_{n_k}(s_k)\rightarrow \bar{v}(0). \end{aligned}$$
(4.1)

We prove (4.1) (cf. the proof of [5, Thm. 2.3]):

Suppose that \((\bar{v}_{n_k}(s_k))_k\) does not converge to \(\bar{v}(0)\). Since \(\mathscr {R}[v_1]\) is sequentially compact, we may extract a convergent subsequence (still denoted by \(\bar{v}_{n_k}(s_k)\)) converging to some \(\bar{v}_0\in \mathscr {S}, \ \bar{v}_0\ne \bar{v}(0)\). For every \(t > 0\), we have \(\bar{v}_{n_k}(t + s_k) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} \bar{v}(t)\) by the uniform convergence in compact subsets of \((0, +\infty )\). Due to (G2) and (G4), the map \(w: [0, +\infty ) \rightarrow \mathscr {S}\),

$$\begin{aligned} w(r):= {\left\{ \begin{array}{ll} \bar{v}_0 &{}\text { if } r=0 \\ \bar{v}(r) &{}\text { if } r > 0 \end{array}\right. } \end{aligned}$$

belongs to \(\mathscr {U}\). As \(\bar{v}, w\in \mathscr {U}\) are sequentially continuous in \([0, +\infty )\), we obtain \(w(0) = \bar{v}(0)\), in contradiction to \(\bar{v}_0 \ne \bar{v}(0)\). This proves (4.1).

It follows that \(\bar{v}(T) = v_1(T_1)\) and \(v_{n_k}(t) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} v(t)\) for all \(t\in [0, +\infty )\), with \(v\in \mathscr {T}[\mathscr {U}]\) defined by \(v(t) := \bar{v}(t\wedge T)\) for all \(t\ge 0\). Moreover, as \(v_1\) is continuous, we have \(\mathscr {R}[v]\subset \mathscr {R}[v_1]\), and by the uniform convergence, we obtain that \(\mathscr {R}[v_1]\subset \mathscr {R}[v]\). Hence, \(\mathscr {R}[v] = \mathscr {R}[v_1]\), and the proof is complete. \(\square\)

Remark 4.3

Following the proof of Theorem 4.2 without assuming continuity of the solutions, it is not difficult to see that any generalized semiflow admitting a function \(\Psi\) as above (i.e. for which (L2) and (L3) hold) satisfies the hypotheses (H1)–(H3), (H5) and

  1. (h4)

    If \(u\in \mathscr {U}\) and there exists a map \(w: [0, \theta ) \rightarrow \mathscr {S}\) with \(\theta < +\infty\) such that \(w|_{[0, T]}\) can be extended to a map in \(\mathscr {U}\) for every \(T\in [0,\theta )\), and \(w([0, \theta )) = \mathscr {R}[u]\), then the \(\omega\)-limit set

    $$\begin{aligned} \omega (u):=\{w_\star \in \mathscr {S}\ | \ \exists t_n \rightarrow +\infty, \ u(t_n){\mathop {\rightarrow }\limits ^{\mathscr {S}}} w_\star \} \end{aligned}$$

    of u is nonempty and there exists a map \(\bar{w}: [0, +\infty ) \rightarrow \mathscr {S}\) in \(\mathscr {U}\) satisfying

    $$\begin{aligned} \bar{w}(t) = w(t) \quad \text {if } t < \theta, \quad \bar{w}(t) \in \omega (u) \quad \text {if } t\ge \theta. \end{aligned}$$

We notice that if \(\Psi\) is continuous, then \(\Psi\) is constant on \(\omega (u)\).

We note that the only point in the proof of Theorem 4.2 where the function \(\Psi\) plays a role is when we prove (H3). Furthermore, the arguments in the proof of (H3) show that a generalized semiflow fails to satisfy (H3) if and only if it admits a nonconstant periodic orbit. So we obtain

Theorem 4.4

Let \(\mathscr {U}\) be a generalized semiflow on \(\mathscr {S}\).

If every solution \(u\in \mathscr {U}\) is sequentially continuous and satisfies (3.1), then \(\mathscr {U}\) is a generalized \(\Lambda\)-semiflow satisfying the compactness hypothesis (C) and all the statements (1)–(5) of Theorem 3.9 hold good for \(\mathscr {U}\).

If there exists a solution \(u\in \mathscr {U}\) which does not satisfy (3.1), then there exists a nonconstant solution \(v\in \mathscr {U}\) and \(\mu > 0\) such that \(v(r) = v(r + \mu )\) for all \(r\ge 0\).

Our next remark concerns the topological setting.

Remark 4.5

The theory of generalized semiflows has been developed by Ball [5, 6] for metric spaces. The only (but critical) point where we make explicit use of the metrizability of the topology is when we apply [5, Thm. 2.2] in the proof of the compactness hypothesis (C).

We conclude this section with a characterization of minimal solutions in terms of a function which decreases along solution curves.

Proposition 4.6

Let a topological space \(\mathscr {S}\) endowed with a Hausdorff topology be given. Let \(\mathscr {U}\) be a generalized \(\Lambda\)-semiflow on \(\mathscr {S}\) satisfying the compactness hypothesis (C). Suppose that every solution \(u\in \mathscr {U}\) is sequentially continuous (3.9) and that there exists a function \(\Psi : \mathscr {S}\rightarrow \mathbb {R}\) which decreases along solution curves, i.e.

$$\begin{aligned} \Psi (u(t)) \ \le \ \Psi (u(s)) \quad \text {for every} \quad 0\le s< t <+\infty, \quad u\in \mathscr {U}. \end{aligned}$$

Then the following holds:

Whenever \(u\in \mathscr {U}_{\text {min}}, \ v\in \mathscr {U}\) with \(u\in \mathscr {U}\,[\mathscr {R}[v]]\), then \(\Psi (u(t)) \le \Psi (v(t))\) for all \(t\in [0, +\infty )\).

Whenever \(u\in \mathscr {U}\) and \(\Psi\) is injective on \(\mathscr {R}[u]\), then u belongs to \(\mathscr {U}_{\text {min}}\) if \(\Psi (u(t)) \le \Psi (v(t))\) for every \(v\in \mathscr {U}\,[\mathscr {R}[u]]\) and \(t\in [0, +\infty )\).

Proof

The first statement directly follows from (3) in Theorem 3.9: indeed, if \(t\in [0, T_\star (u))\), then there exists \(\bar{t}\ge 0\) with \(u(t) = v(\bar{t})\) and applying (3) we obtain \(\bar{t} \ge t\) and hence \(\Psi (u(t)) = \Psi (v(\bar{t})) \le \Psi (v(t))\); if \(T_\star (u) < +\infty\) and \(t\ge T_\star (u)\), then \(\Psi (u(t)) = \min _{s\ge 0} \Psi (u(s)) \le \inf _{s\ge 0} \Psi (v(s)) \le \Psi (v(t))\).

Now, we prove the second statement. Let \(u\in \mathscr {U}\) be given and assume that \(\Psi\) is injective on \(\mathscr {R}[u]\) and that \(\Psi (u(t)) \le \Psi (v(t))\) for every \(v\in \mathscr {U}\,[\mathscr {R}[u]]\) and \(t\in [0, +\infty )\). We note that \(\Psi\) is injective on \(\mathscr {R}[v]\) for every \(v\in \mathscr {U}\,[\mathscr {R}[u]]\): indeed, suppose that there exist \(v\in \mathscr {U}\,[\mathscr {R}[u]]\) with \(T_\star (v) < +\infty\) and \(\bar{t} < T_\star (v)\) such that \(\Psi (v(\bar{t})) = \Psi (v(T_\star (v))) = \min _{s\ge 0} \Psi (v(s))\); then \(\Psi \circ v\) is constant in the interval \([\bar{t}, T_\star (v)]\), in contradiction to \(\bar{t} < T_\star (v)\) and the injectivity of \(\Psi\) on \(\mathscr {R}[u]\). The claim follows. Suppose now that \(u\succ v\) for some \(v\in \mathscr {U}\). Then there exists an increasing 1-Lipschitz map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) such that \(u(t) = v({\mathsf {z}}(t))\) for all \(t\in [0, +\infty )\). It holds that \({\mathsf {z}}(t) \le t\) for all \(t\ge 0\) and \(v\in \mathscr {U}\,[\mathscr {R}[u]]\). Hence, \(\Psi (u(t)) = \Psi (v({\mathsf {z}}(t))) \ge \Psi (v(t)) \ge \Psi (u(t))\) for all \(t\in [0, +\infty )\). This yields \(u(t) = v(t)\) for all \(t\in [0, +\infty )\) and the proof is complete. \(\square\)

Remark 4.7

We do not make use of (C) and (3.9) in the proof of the second statement of Proposition 4.6 (cf. Remark 3.10).

5 Minimal solutions to gradient flows

It is known that gradient flows can be studied within the framework of generalized semiflows [23]. However, our approach to applying the theory of minimal solutions to gradient flows in metric spaces is independent of Sect. 4. The special structure of the energy dissipation inequality makes it possible to consider cases in which the gradient flow for a functional does not fit into the concept of generalized semiflow due to a lack of compactness but is still a generalized \(\Lambda\)-semiflow.

We find a particular feature of the minimal solutions to a gradient flow: they cross the critical set

$$\begin{aligned} \{x\in \mathscr {S}\ | \ g(x) = 0\} \end{aligned}$$

of the functional with respect to the corresponding upper gradient g only in a negligible set of times before possibly becoming eventually constant.

5.1 Curves of maximal slope

We give some of the basic definitions concerning gradient flows in metric spaces, following the fundamental book by Ambrosio, Gigli and Savaré [2]:

Let \((\mathscr {S}, d)\) be a complete metric space and let the notation \({\mathop {\rightarrow }\limits ^{\mathscr {S}}}\) correspond to the convergence in the metric d, i.e.

$$\begin{aligned} x_j {\mathop {\rightarrow }\limits ^{\mathscr {S}}} x \quad :\Leftrightarrow \quad d(x_j, x) \rightarrow 0 \end{aligned}$$

for \(x_j, x\in \mathscr {S}\).

So-called curves of maximal slope are defined for an extended real functional \(\phi : \mathscr {S}\rightarrow (-\infty, +\infty ]\) with proper effective domain

$$\begin{aligned} D(\phi ) := \{\phi < +\infty \} \ \ne \emptyset. \end{aligned}$$

The notion of curves of maximal slope goes back to [9], with further developments in [10, 18].

Locally absolutely continuous curve

Definition 5.1

We say that a curve \(v: \ [0, +\infty ) \rightarrow \mathscr {S}\) is locally absolutely continuous and write \(v\in AC_{\text {loc}}([0, +\infty );\mathscr {S})\) if there exists \(m\in L_{\text {loc}}^1(0,+\infty )\) such that

$$\begin{aligned} d(v(s),v(t)) \le \int ^{t}_{s}{m(r) \ dr} \quad \text {for all } 0 \le s\le t < +\infty. \end{aligned}$$

In this case, the limit

$$\begin{aligned} |v'|(t) := \mathop {\lim }_{s\rightarrow t} \frac{d(v(s),v(t))}{|s-t|} \end{aligned}$$

exists for \(\mathscr {L}^1\)-a.e. \(t\in (0, +\infty )\), the function \(t \mapsto |v'|(t)\) belongs to \(L_{\text {loc}}^1(0, +\infty )\) and is called the metric derivative of v. The metric derivative is \(\mathscr {L}^1\)-a.e. the smallest admissible function m in the definition above.
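
For orientation, we recall an elementary special case (added here only as an illustration): if \(\mathscr {S}= \mathbb {R}^d\) is endowed with the Euclidean distance and v is continuously differentiable, then

$$\begin{aligned} |v'|(t) = \mathop {\lim }_{s\rightarrow t} \frac{|v(s)-v(t)|}{|s-t|} = |\dot{v}(t)| \quad \text {for all } t\in (0, +\infty ), \end{aligned}$$

so that the metric derivative reduces to the Euclidean norm of the classical derivative.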

Strong upper gradient

Definition 5.2

A function \(g: \mathscr {S}\rightarrow [0, +\infty ]\) is a strong upper gradient for the functional \(\phi\) if for every \(v\in AC_{\text {loc}}([0,+\infty );\mathscr {S})\) the function \(g\circ v\) is Borel and

$$\begin{aligned} |\phi (v(t)) - \phi (v(s))| \le \int ^{t}_{s}{g(v(r))|v'|(r) \ dr} \quad \text {for all } 0 \le s\le t < +\infty. \end{aligned}$$
(5.1)

In particular, if \(g\circ v |v'| \in L_{\text {loc}}^1(0,+\infty )\) then \(\phi \circ v\) is locally absolutely continuous and

$$\begin{aligned} |(\phi \circ v)'(t)| \le g(v(t))|v'|(t) \quad \text {for } \mathscr {L}^1\text {-a.e. } t\in (0,+\infty ). \end{aligned}$$

This slightly modified version of [2, Def. 1.2.1] (which requires (5.1) only for \(s > 0\)) can be found in [23].

In [2], the concept of weak upper gradient is also defined. The notion of upper gradient is an abstraction of the modulus of the gradient to a general metric and nonsmooth setting.

Curve of maximal slope

Definition 5.3

Let \(g: \mathscr {S}\rightarrow [0, +\infty ]\) be a strong or weak upper gradient for the functional \(\phi\), \({\text { and } p\in (1,+\infty ) \text { with conjugate exponent } q }\).

A locally absolutely continuous curve \(u: [0, +\infty ) \rightarrow \mathscr {S}\) is called a p-curve of maximal slope for \(\phi\) with respect to its upper gradient g if \(\phi \circ u\) is \(\mathscr {L}^1\)-a.e. equal to a decreasing map \(\varphi : [0, +\infty ) \rightarrow \mathbb {R}\), i.e.

$$\begin{aligned} \phi (u(r)) = \varphi (r) \text { for } \mathscr {L}^1\text {-a.e. } r\ge 0, \quad \varphi (t) \le \varphi (s) \text { for all } 0\le s< t <+\infty, \end{aligned}$$

and the energy dissipation inequality

$$\begin{aligned} \varphi (s) - \varphi (t) \ \ge \ \frac{1}{q} \int ^{t}_{s}{g^q (u(r)) \ dr} \ + \ \frac{1}{p} \int ^{t}_{s}{|u'|^p (r) \ dr} \end{aligned}$$

is satisfied for all \(0\le s \le t < +\infty\).

Typical candidates for g are the local slope

$$\begin{aligned} |\partial \phi |(x) := \mathop {\limsup }_{d(y,x) \rightarrow 0} \frac{(\phi (x)-\phi (y))^+}{d(x,y)} \quad (x\in D(\phi )), \end{aligned}$$

the relaxed slope

$$\begin{aligned} |\partial ^- \phi |(x) := \inf \left\{ \mathop {\liminf }_{j\rightarrow \infty } |\partial \phi |(x_j): \ d(x_j,x)\rightarrow 0, \ \sup _{j} \phi (x_j) < +\infty \right\} \end{aligned}$$

and similar modifications of the lower semicontinuous envelope of the local slope [2, 10, 18, 22, 23].

Remark 5.4

If \(\mathscr {S}= \mathbb {R}^d\) and \(\phi : \mathbb {R}^d \rightarrow \mathbb {R}\) is a continuously differentiable Lipschitz function, then \(g:=|\nabla \phi | = |\partial \phi | = |\partial ^-\phi |\) is a strong upper gradient for \(\phi\), the energy dissipation inequality \({\text {for } p=2}\) is equivalent to the classical gradient flow equation

$$\begin{aligned} u'(t) \ = \ -\nabla \phi (u(t)), \quad t > 0, \end{aligned}$$

and admits at least one solution for every initial value.
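
For the reader's convenience, we recall the standard computation behind this equivalence (a well-known argument, included here only for illustration): for \(u\in AC_{\text {loc}}([0, +\infty ); \mathbb {R}^d)\), the chain rule and Young's inequality give

$$\begin{aligned} \phi (u(s)) - \phi (u(t)) \ = \ -\int _s^t{\langle \nabla \phi (u(r)), u'(r)\rangle \ dr} \ \le \ \frac{1}{2}\int _s^t{|\nabla \phi (u(r))|^2 \ dr} \ + \ \frac{1}{2}\int _s^t{|u'(r)|^2 \ dr}, \end{aligned}$$

with equality if and only if \(u'(r) = -\nabla \phi (u(r))\) for \(\mathscr {L}^1\)-a.e. \(r\in (s, t)\); hence the energy dissipation inequality (with \(p = q = 2\)) forces equality and is equivalent to the gradient flow equation.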

Definition of \(\mathscr {U}_p(\phi, g)\) It is usually not clear a priori whether a candidate function \(g: \mathscr {S}\rightarrow [0, +\infty ]\) is an upper gradient or not (except that the local slope is a weak upper gradient [2]).

Our analysis of gradient flows with regard to our concept of generalized \(\Lambda\)-semiflow and minimal solutions will not rely on the behaviour of g as a strong or weak upper gradient. Our considerations will concern locally absolutely continuous curves satisfying the energy dissipation inequality for some given function \(g: \mathscr {S}\rightarrow [0, +\infty ]\) without specifying the role g plays for the functional \(\phi\).

In view of the concatenation hypothesis (H2), we assume that the energy dissipation inequality holds everywhere for \(\varphi = \phi \circ u\).

Definition 5.5

Let \(\phi : \mathscr {S}\rightarrow (-\infty, +\infty ]\) and \(g: \mathscr {S}\rightarrow [0, +\infty ]\) be given, \({\text {and } p\in (1,+\infty ) \text { with conjugate exponent } q}\). We define \(\mathscr {U}_p(\phi, g)\) as the family of all the locally absolutely continuous curves \(u\in AC_{\text {loc}}([0, +\infty ); \mathscr {S})\) with \(u(0)\in D(\phi )\), satisfying the energy dissipation inequality

$$\begin{aligned} \phi (u(s)) - \phi (u(t)) \ \ge \ \frac{1}{q} \int ^{t}_{s}{g^q (u(r)) \ dr} \ + \ \frac{1}{p} \int ^{t}_{s}{|u'|^p (r) \ dr} \end{aligned}$$
(5.2)

for all \(0\le s \le t < +\infty\).

If g is a weak or strong upper gradient for \(\phi\) and \(u\in \mathscr {U}_p(\phi, g)\), then u is a p-curve of maximal slope for \(\phi\) w.r.t. g.
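
As a quick illustration of Definition 5.5 (purely expository; the functions \(\phi\), g and the curve u below are our own simple choices and are not objects introduced in this paper), one may check (5.2), in fact with equality, numerically for a one-dimensional example:

```python
import numpy as np

# Illustrative check of the energy dissipation inequality (5.2) for a toy example:
# S = R, phi(x) = x**2 / 2, g(x) = |x| (= |phi'(x)|), p = q = 2,
# and the explicit curve u(t) = u0 * exp(-t), which solves u' = -phi'(u).
u0, p, q = 1.0, 2.0, 2.0
phi = lambda x: 0.5 * x**2
g = lambda x: np.abs(x)

t = np.linspace(0.0, 5.0, 50001)
u = u0 * np.exp(-t)
du = np.abs(np.gradient(u, t))          # metric derivative |u'| on the grid

dt = t[1] - t[0]
lhs = phi(u[0]) - phi(u[-1])            # phi(u(s)) - phi(u(t)) with s = 0, t = 5
rhs = np.sum(g(u)**q / q + du**p / p) * dt

print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}")  # agreement up to discretization error
```

Here equality in (5.2) reflects the fact that u is the classical gradient flow of \(\phi\) (cf. Remark 5.4).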

Remark 5.6

In Definition 5.5, we tacitly assume that \(g\circ u\) is Borel; otherwise the integral on the right-hand side would be set to \(+\infty\).

Example of a nonempty family \(\mathscr {U}_p(\phi, g)\) The following existence result is provided in [2], the proof of which is based on the notion of minimizing movements [8]: Suppose that the functional \(\phi : \mathscr {S}\rightarrow (-\infty, +\infty ]\) is lower semicontinuous, i.e.

$$\begin{aligned} d(x_j, x) \rightarrow 0 \quad \Rightarrow \quad \mathop {\liminf }_{j\rightarrow \infty } \phi (x_j) \ge \phi (x), \end{aligned}$$
(5.3)

\({\text {has a lower bound of order }p}\), i.e. there exist \(A, B > 0, \ x_{\star }\in \mathscr {S}\) such that

$$\begin{aligned} \phi (\cdot ) \ge -A -B d^p(\cdot, x_{\star }), \end{aligned}$$
(5.4)

and suppose that d-bounded subsets of a sublevel of \(\phi\) are relatively compact, i.e.

$$\begin{aligned} \sup _{j,l}\{d(x_j,x_l), \phi (x_j)\} < +\infty \ \Rightarrow \ \exists \ j_k \uparrow +\infty, \ x\in \mathscr {S}: \ d(x_{j_k}, x) \rightarrow 0. \end{aligned}$$
(5.5)

Further, suppose that \(g:= |\partial ^-\phi |\) is a strong upper gradient for \(\phi\). Then the following holds [2]: for every \(u_0\in D(\phi )\), there exists at least one p-curve u of maximal slope for \(\phi\) w.r.t. \(|\partial ^-\phi |\) with initial value \(u(0) = u_0\); the energy dissipation inequality (5.2) holds (in fact, equality holds in (5.2)) and \(u\in \mathscr {U}_p(\phi, |\partial ^-\phi |)\).

Remark 5.7

Whenever \(g: \mathscr {S}\rightarrow [0, +\infty ]\) is a strong upper gradient for a functional \(\phi : \mathscr {S}\rightarrow (-\infty, +\infty ]\), and there exists a p-curve u of maximal slope for \(\phi\) w.r.t. g, it follows from Definition 5.2 of strong upper gradient that \(u\in \mathscr {U}_p(\phi, g)\) (with equality in (5.2)) and \(\phi \circ u\) is locally absolutely continuous.

The family \(\mathscr {U}_p(\phi, g)\) then coincides with the collection of all the p-curves of maximal slope for \(\phi\) w.r.t. g.

5.2 Gradient flow as generalized \(\Lambda\)-semiflow

We want to prove that \(\mathscr {U}_p(\phi, g)\) is a generalized \(\Lambda\)-semiflow:

Theorem 5.8

Let \(\phi : \mathscr {S}\rightarrow (-\infty, +\infty ]\) and \(g: \mathscr {S}\rightarrow [0, +\infty ]\) be given, \({\text {and } p\in (1,+\infty )}\). We assume that \(\phi\) and g are lower semicontinuous, i.e.

$$\begin{aligned} d(x_j, x)\rightarrow 0 \quad \Rightarrow \quad \liminf _{j\rightarrow +\infty } \phi (x_j) \ge \phi (x), \quad \liminf _{j\rightarrow +\infty } g(x_j)\ge g(x), \end{aligned}$$
(5.6)

and \(\phi\) \({\text {has a lower bound of order } p}\), i.e. there exist \(A, B > 0, \ x_\star \in \mathscr {S}\) such that

$$\begin{aligned} \phi (\cdot ) \ge -A -Bd^p(\cdot, x_\star ), \end{aligned}$$
(5.7)

and we suppose that \(\mathscr {U}_p(\phi, g)\ne \emptyset\). Then \(\mathscr {U}_p(\phi, g)\) is a generalized \(\Lambda\)-semiflow, according to Definition 3.1.

Proof

\({\text {Let } q \text { denote the conjugate exponent of } p}\).

We first note that if g is lower semicontinuous, then \(g\circ u\) is Borel for every curve \(u\in AC_{\text {loc}}([0, +\infty ); \mathscr {S})\).

The hypothesis (H1) follows by the classical change of variables formula: if \(u\in \mathscr {U}_p(\phi, g)\) and \(\tau \ge 0\), then \(u_\tau (\cdot ) := u(\cdot + \tau ) \in AC_{\text {loc}}([0, +\infty );\mathscr {S})\) with metric derivative \(|u_\tau '|(\cdot ) = |u'|(\cdot + \tau )\) and

$$\begin{aligned} \phi (u_\tau (s)) - \phi (u_\tau (t))\ge & {} \frac{1}{q}\int _{s+\tau }^{t+\tau }{g^q(u(r)) \ dr} + \frac{1}{p}\int _{s+\tau }^{t+\tau }{|u'|^p(r) \ dr} \\\ge & {} \frac{1}{q}\int _{s}^{t}{g^q(u_\tau (r)) \ dr} + \frac{1}{p}\int _{s}^{t}{|u_\tau '|^p(r) \ dr}. \end{aligned}$$

Similarly, we show (H2). Let \(u, v\in \mathscr {U}_p(\phi, g)\) with \(v(0) = u(\bar{t})\) for some \(\bar{t}\ge 0\) and define \(w: [0, +\infty ) \rightarrow \mathscr {S}\),

$$\begin{aligned} w(t) := {\left\{ \begin{array}{ll} u(t) &{}\text { if } t\le \bar{t} \\ v(t-\bar{t}) &{}\text { if } t > \bar{t} \end{array}\right. } \end{aligned}$$

Clearly, \(w\in AC_{\text {loc}}([0, +\infty ); \mathscr {S})\) with

$$\begin{aligned} |w'|(r) = {\left\{ \begin{array}{ll} |u'|(r) &{}\text { if } r\le \bar{t} \\ |v'|(r-\bar{t}) &{}\text { if } r > \bar{t} \end{array}\right. } \end{aligned}$$

and the energy dissipation inequality (5.2) directly follows for \(0\le s \le t \le \bar{t}\) and, by change of variables as above, for \(\bar{t} \le s \le t < +\infty\). If \(0\le s< \bar{t} < t\), we obtain (5.2) by splitting up

$$\begin{aligned} \phi (w(s)) - \phi (w(t)) \ = \ \phi (w(s)) - \phi (w(\bar{t})) + \phi (w(\bar{t})) - \phi (w(t)). \end{aligned}$$

This shows (H2).

Now, let a map \(u: [0, +\infty ) \rightarrow \mathscr {S}\) be given with the property that \(u|_{[0, T]}\) can be extended to a map in \(\mathscr {U}_p(\phi, g)\) for all \(T > 0\), i.e. for every \(T > 0\) there exists \(w_T \in \mathscr {U}_p(\phi, g)\) with \(w_T(t) = u(t) \text { if } t\le T\). In particular, it holds that \(u\in AC_{\text {loc}}([0, +\infty ); \mathscr {S})\) and \(|w_T'|(\cdot ) = |u'|(\cdot )\) in (0, T). Hence, \(u\in \mathscr {U}_p(\phi, g)\). This shows that \(\mathscr {U}_p(\phi, g)\) satisfies (H5).

Obviously, \(\mathscr {U}_p(\phi, g)\) satisfies (H3).

It remains to prove (H4). Let \(u\in \mathscr {U}_p(\phi, g)\). Suppose that there exists a map \(w: [0, \theta ) \rightarrow \mathscr {S}\) with \(\theta < +\infty\) and \(w([0, \theta )) = \mathscr {R}[u]\) such that \(w|_{[0, T]}\) can be extended to a map in \(\mathscr {U}_p(\phi, g)\) for all \(T\in [0, \theta )\). Then \(w\in AC([0, T]; \mathscr {S})\) for every \(T \in (0, \theta )\), i.e. the metric derivative

$$\begin{aligned} |w'|(t):= \mathop {\lim }_{s\rightarrow t} \frac{d(w(s),w(t))}{|s-t|} \end{aligned}$$

exists for \(\mathscr {L}^1\)-a.e. \(t\in (0, \theta )\) and \(t\mapsto |w'|(t)\) belongs to \(L^1(0, T)\) for every \(T\in (0, \theta )\), and

$$\begin{aligned} d(w(s),w(t)) \le \int ^{t}_{s}{|w'|(r) \ dr} \quad \text {for all } 0 \le s\le t < \theta. \end{aligned}$$

Moreover, \(w(0) \in D(\phi )\) and the energy dissipation inequality

$$\begin{aligned} \phi (w(s)) - \phi (w(t)) \ \ge \ \frac{1}{q} \int _s^t{g^q(w(r)) \ dr} \ + \ \frac{1}{p}\int _s^t{|w'|^p(r) \ dr} \end{aligned}$$
(5.8)

holds for all \(0\le s \le t < \theta\). By assumption, there exist \(A, B > 0, \ x_\star \in \mathscr {S}\) such that

$$\begin{aligned} \phi (\cdot ) \ge -A - B d^p(\cdot, x_\star ). \end{aligned}$$

We set

$$\begin{aligned} \xi (t):= \phi (w(t)) + 2B d^p(w(t), x_\star ) + A \quad \text {for } t\in [0, \theta ). \end{aligned}$$

It holds that \(\xi\) is nonnegative and Borel (since \(\phi\) is lower semicontinuous) and for every \(t\in [0, \theta )\), the map \(\xi\) is bounded from above in [0, t] and

$$\begin{aligned} \xi (t)\le & {} \xi (0) - \frac{1}{p} \int _0^t{|w'|^p(r) \ dr} + 2Bp \int _0^t{d^{p-1}(w(r), x_\star )|w'|(r) \ dr} \\\le & {} \xi (0) + \frac{(2Bp)^q}{q} \int _0^t{d^p(w(r), x_\star ) \ dr} \\\le & {} \xi (0) + \frac{(2p)^q}{q}B^{q-1}\int _0^t{\xi (r) \ dr}. \end{aligned}$$

We used the fact that \([0, t] \ni r \mapsto d^p(w(r), x_\star )\) is absolutely continuous due to the chain rule for BV functions [1, Thm. 3.99]: indeed, the map \([0, t] \ni r\mapsto \eta (r):=d(w(r), x_\star )\) is absolutely continuous with

$$\begin{aligned} |\eta (r_2) - \eta (r_1)| \le d(w(r_1), w(r_2)) \le \int _{r_1}^{r_2}{|w'|(r) \ dr} \quad \text { for } 0\le r_1 \le r_2 \le t \end{aligned}$$

and bounded in [0, t], so we may apply [1, Thm. 3.99] to \(\eta ^p\) and obtain

$$\begin{aligned} |d^p(w(r_2), x_\star ) - d^p(w(r_1), x_\star )| \le \int _{r_1}^{r_2}{p d^{p-1}(w(r), x_\star )|w'|(r) \ dr} \ (0\le r_1 \le r_2 \le t). \end{aligned}$$
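
For completeness, we note that the second inequality in the chain for \(\xi\) above follows from Young's inequality \(ab \le \frac{a^p}{p} + \frac{b^q}{q}\), applied with \(a = |w'|(r)\) and \(b = 2Bp\, d^{p-1}(w(r), x_\star )\):

$$\begin{aligned} 2Bp\, d^{p-1}(w(r), x_\star )\, |w'|(r) \ \le \ \frac{1}{p}|w'|^p(r) \ + \ \frac{(2Bp)^q}{q}\, d^{(p-1)q}(w(r), x_\star ), \quad (p-1)q = p; \end{aligned}$$

the third inequality uses \(B d^p(w(r), x_\star ) \le \xi (r)\), which is immediate from the lower bound on \(\phi\) and the definition of \(\xi\).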

Applying the integral form of Gronwall’s inequality (see e.g. [11, Appendix B]) to \(\xi\) \({\text {and setting } C:=(2p)^q B^{q-1}/q }\), we obtain

$$\begin{aligned} \xi (t) \ \le \ \xi (0)(1+Ct e^{Ct}) \ \le \ \underbrace{\xi (0)(1+C\theta e^{C\theta })}_{=: \xi _{0, \theta }} \quad \text { for all } t\in [0, \theta ). \end{aligned}$$
(5.9)

In particular,

$$\begin{aligned} Bd^p(w(t), x_\star ) \ \le \ \xi (t) \ \le \ \xi _{0, \theta }, \quad \phi (w(t)) \ \ge \ -A - \xi _{0, \theta } \ > \ -\infty \end{aligned}$$
(5.10)

for all \(t\in [0, \theta )\). By the Hölder inequality, it follows from (5.8) and (5.10) that

$$\begin{aligned} d(w(s), w(t)) \ \le \ (t-s)^{\frac{1}{q}} (p\phi (w(0)) + pA + p\xi _{0, \theta })^{\frac{1}{p}} \quad \text {for all } 0\le s \le t < \theta. \end{aligned}$$
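
In more detail, (5.8) (which in particular yields \(\phi (w(s)) \le \phi (w(0))\)) and (5.10) give

$$\begin{aligned} \frac{1}{p}\int _s^t{|w'|^p(r) \ dr} \ \le \ \phi (w(s)) - \phi (w(t)) \ \le \ \phi (w(0)) + A + \xi _{0, \theta }, \end{aligned}$$

and the Hölder inequality yields \(d(w(s), w(t)) \le \int _s^t{|w'|(r) \ dr} \le (t-s)^{\frac{1}{q}} \big (\int _s^t{|w'|^p(r) \ dr}\big )^{\frac{1}{p}}\).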

Since \(\mathscr {S}\) is complete, this shows that the limit \(\lim _{t\uparrow \theta } w(t) =: w_\star \in \mathscr {S}\) exists.

If \(T_\star (u) < +\infty\), then \(u(t) = w_\star\) for all \(t\in [T_\star (u), +\infty )\) and there exists \(T\in [0, \theta )\) such that \(w(t) = w_\star\) for all \(T\le t < \theta\); nothing remains to be shown in this case.

If \(T_\star (u) = +\infty\) and \(S_n\uparrow +\infty\), there exists a corresponding increasing sequence \(T_n\uparrow \theta\) with \(w(T_n) = u(S_n)\); this follows from (H3) (cf. proof of Theorem 4.2). So we obtain

$$\begin{aligned} d(u(t_n), w_\star ) \rightarrow 0 \quad \text {whenever } t_n \rightarrow +\infty. \end{aligned}$$

Moreover, since \(u\in \mathscr {U}_p(\phi, g)\) satisfies the energy dissipation inequality (5.2) and \(\inf _{t\ge 0} \phi (u(t)) = \inf _{t\in [0, \theta )}\phi (w(t)) > -\infty\) by (5.10), it holds that

$$\begin{aligned} \int _0^{+\infty }{g^q(u(r)) \ dr} < +\infty. \end{aligned}$$

Hence, \(\liminf _{r\rightarrow +\infty } g(u(r)) = 0\) and we obtain

$$\begin{aligned} g(w_\star ) = 0 \end{aligned}$$
(5.11)

by the lower semicontinuity of g. Further, for \(s\in [0, \theta )\), the energy dissipation inequality

$$\begin{aligned} \phi (w(s)) - \phi (w_\star ) \ \ge \ \frac{1}{q} \int _s^\theta {g^q(w(r)) \ dr} \ + \ \frac{1}{p}\int _s^\theta {|w'|^p(r) \ dr} \end{aligned}$$
(5.12)

follows from (5.8) and the lower semicontinuity of \(\phi\).

We define \(\bar{w}: [0, +\infty ) \rightarrow \mathscr {S}\),

$$\begin{aligned} \bar{w}(t):= {\left\{ \begin{array}{ll} w(t) &{}\text { if } t < \theta \\ w_\star &{}\text { if } t\ge \theta \end{array}\right. } \end{aligned}$$

Clearly, \(\bar{w}\in AC_{\text {loc}}([0, +\infty ); \mathscr {S})\), and by (5.8), (5.12) and (5.11), it holds that \(\bar{w}\in \mathscr {U}_p(\phi, g)\).

The proof is complete. \(\square\)

The assumptions (5.6) and (5.7) on \(\phi\) and g in Theorem 5.8 are only used in the proof of (H4). The lower semicontinuity hypotheses on \(\phi\) and g allow the passage to the limit in the energy dissipation inequality and are natural assumptions whenever some kind of limit behaviour concerning the energy dissipation inequality is of interest (cf. the long-time analysis for gradient flows in metric spaces in [7, 23]). This will again be the case in the proof of (C) in Sect. 5.3.

We note that we do not need to require any compactness property of \(\phi\) such as (5.5); the \({\text {lower bound}}\) (5.7) suffices for our purposes. Also, the existence of a minimal reparametrization corresponding to a given solution will be proved without assuming any compactness property of \(\phi\). In fact, our assumption that \(\mathscr {U}_p(\phi,g)\) is nonempty is the only point at which some compactness property may play a role (cf. Sect. 5.1).

5.3 Minimal gradient flow

In this section, we study minimal solutions to gradient flows. Our aim is to apply Theorem 3.9 and Proposition 4.6 to \(\mathscr {U}= \mathscr {U}_p(\phi, g)\), providing existence and features of minimal solutions. Moreover, we will see that minimal solutions \(u\in \mathscr {U}_{\text {min}}\) to gradient flows are characterized by the particular property that

$$\begin{aligned} \mathscr {L}^1(\{t\in [0, T_\star (u)) \ | \ g(u(t)) = 0\}) = 0. \end{aligned}$$

Let \(\phi : \mathscr {S}\rightarrow (-\infty, +\infty ]\) and \(g:\mathscr {S}\rightarrow [0, +\infty ]\) be given, \({\text {and } p\in (1,+\infty )}\). Throughout this section, we assume that \(\phi\) and g are lower semicontinuous (5.6) and \(\phi\) \({\text {has a lower bound}}\) (5.7) \({\text {of order }p}\), and we define \(\mathscr {U}_p(\phi, g)\) as in Definition 5.5. Due to Theorem 5.8, the family \(\mathscr {U}_p(\phi, g)\) is a generalized \(\Lambda\)-semiflow on \(\mathscr {S}\) (provided it is nonempty).

We want to prove that \(\mathscr {U}_p(\phi, g)\) satisfies (C). The critical point is a passage to the limit in the energy dissipation inequality, as in the proof of (H4). The passage to the limit will now concern both terms on the left-hand side of the energy dissipation inequality (5.2) so that the lower semicontinuity of \(\phi\) will not suffice. Such obstacles are usually overcome by assuming that g is a strong upper gradient for \(\phi\) or by allowing any decreasing function \(\varphi \ge \phi \circ u\) in a modified energy dissipation inequality with pairs \((u, \varphi )\) as solutions (cf. [23]).

For our purposes, it is sufficient to assume that \(\phi \circ u: [0, +\infty ) \rightarrow \mathbb {R}\) is continuous for every solution \(u\in \mathscr {U}_p(\phi, g)\). This is satisfied, e.g. if g is a strong upper gradient (cf. Remark 5.7).

Theorem 5.9

Let the assumptions of Theorem 5.8 be satisfied and suppose that

$$\begin{aligned} \phi \circ u: [0, +\infty ) \rightarrow \mathbb {R} \quad \text {is continuous for every} \quad u\in \mathscr {U}_p(\phi, g). \end{aligned}$$
(5.13)

Then the generalized \(\Lambda\)-semiflow \(\mathscr {U}_p(\phi, g)\) satisfies (C) and all the statements (1)–(5) of Theorem 3.9 hold good for \(\mathscr {U}_p(\phi, g)\). Moreover, both statements of Proposition 4.6 are applicable to \(\mathscr {U}_p(\phi, g)\).

Proof

We write \(\mathscr {U}=\mathscr {U}_p(\phi, g)\). Let a sequence \(v_n\in \mathscr {T}[\mathscr {U}], \ n\in \mathbb {N},\) be given with \(\sup _n\rho (v_n) < +\infty\) and \(\mathscr {R}[v_n] = \mathscr {R}[v_1]\) for all \(n\in \mathbb {N}\). Since the truncated solution \(v_1\) is continuous with \(T_1:=\rho (v_1) < +\infty\), its range \(\mathscr {R}[v_1]\) is sequentially compact. Furthermore, it is straightforward to check that

$$\begin{aligned} \sup _{n\in \mathbb {N}}\int _0^{+\infty }{|v_n'|^p(r) \ dr} \ \le \ p(\phi (v_1(0)) - \phi (v_1(T_1))) \ < \ +\infty. \end{aligned}$$
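
Indeed, writing \(v_n(\cdot ) = \bar{v}_n(\cdot \wedge T_n)\) with \(\bar{v}_n\in \mathscr {U}\) and \(T_n := \rho (v_n)\), we have \(v_n(0) = v_1(0)\) and \(v_n(T_n) = v_1(T_1)\) by (H3) (cf. the proof of Theorem 4.2), and \(|v_n'| = 0\) in \((T_n, +\infty )\), so the energy dissipation inequality (5.2) for \(\bar{v}_n\) gives

$$\begin{aligned} \frac{1}{p}\int _0^{+\infty }{|v_n'|^p(r) \ dr} \ = \ \frac{1}{p}\int _0^{T_n}{|\bar{v}_n'|^p(r) \ dr} \ \le \ \phi (\bar{v}_n(0)) - \phi (\bar{v}_n(T_n)) \ = \ \phi (v_1(0)) - \phi (v_1(T_1)). \end{aligned}$$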

Applying a refined version of the Ascoli-Arzelà theorem [2, Prop. 3.3.1], we obtain that there exist a subsequence \(n_k\uparrow +\infty\) and a curve \(v: [0, +\infty ) \rightarrow \mathscr {S}\) such that

$$\begin{aligned} v_{n_k}(t) {\mathop {\rightarrow }\limits ^{\mathscr {S}}} v(t) \quad \text {for all } t\in [0, +\infty ). \end{aligned}$$
(5.14)

It is not difficult to see that \(v\in AC_{\text {loc}}([0, +\infty ); \mathscr {S})\) and

$$\begin{aligned} \int _{s}^{t}{|v'|^p (r) \ dr} \ \le \ \liminf _{k\rightarrow +\infty }\int _s^t{|v_{n_k}'|^p(r) \ dr} \quad \text {for all } 0\le s\le t <+\infty. \end{aligned}$$
(5.15)

We may assume w.l.o.g. that \(T_{n_k} := \rho (v_{n_k}) \rightarrow T\) for some \(T\in [0, +\infty )\). For every \(t\in [0, +\infty )\), there exists a sequence of times \(t_k\in [0, T_1]\) such that \(v_1(t_k) = v_{n_k}(t)\). It follows from this and from (5.14) and (5.13) that \(\mathscr {R}[v]\subset \mathscr {R}[v_1]\) and

$$\begin{aligned} \phi (v_{n_k}(t)) \rightarrow \phi (v(t)) \quad \text {for all } t\in [0, +\infty ), \quad v(t) = v_1(T_1) \quad \text {for all } t\ge T. \end{aligned}$$

We obtain

$$\begin{aligned} \phi (v(s)) - \phi (v(t))= & {} \lim _{k\rightarrow +\infty } (\phi (v_{n_k}(s)) - \phi (v_{n_k}(t))) \\\ge & {} \liminf _{k\rightarrow +\infty } \frac{1}{q}\int _s^t{g^q(v_{n_k}(r)) \ dr} + \liminf _{k\rightarrow +\infty } \frac{1}{p} \int _s^t{|v_{n_k}'|^p(r) \ dr} \\\ge & {} \frac{1}{q}\int _s^t{g^q(v(r)) \ dr} + \frac{1}{p} \int _s^t{|v'|^p(r) \ dr} \end{aligned}$$

for all \(0\le s\le t < T\), due to the fact that \(v_{n_k}\) satisfies (5.2) in \([0, T_{n_k}]\) and due to (5.15), the lower semicontinuity (5.6) of g and Fatou’s lemma.

Since v is continuous, \(\mathscr {R}[v]\subset \mathscr {R}[v_1]\) and (5.13) holds, the map \(\phi \circ v\) is continuous. It follows that \(\mathscr {R}[v] = \mathscr {R}[v_1]\) since \(v(0) = v_1(0)\), \(v(T) = v_1(T_1)\), \(\phi (\mathscr {R}[v_1]) = [\phi (v_1(T_1)), \phi (v_1(0))]\) and \(\phi\) injective on \(\mathscr {R}[v_1]\) (cf. Remark 5.10); further, the energy dissipation inequality

$$\begin{aligned} \phi (v(s)) - \phi (v(t)) \ \ge \ \frac{1}{q}\int _s^t{g^q(v(r)) \ dr} + \frac{1}{p} \int _s^t{|v'|^p(r) \ dr} \end{aligned}$$

holds for all \(0\le s \le t \le T\).

Now, let \(\bar{v}_1\in \mathscr {U}\) be such that \(v_1(\cdot ) = \bar{v}_1(\cdot \wedge T_1)\). Arguments similar to those in the proof of Theorem 5.8, (H2), show that \(\bar{v}\in \mathscr {U}\), where \(\bar{v}: [0, +\infty ) \rightarrow \mathscr {S}\) is defined as

$$\begin{aligned} \bar{v}(t) := {\left\{ \begin{array}{ll} v(t) &{}\text { if } 0\le t \le T \\ \bar{v}_1(t-T + T_1) &{}\text { if } t > T \end{array}\right. } \end{aligned}$$

Hence \(v\in \mathscr {T}[\mathscr {U}]\). The proof of (C) is complete. \(\square\)

Remark 5.10

Proposition 4.6 is applicable with \(\Psi := \phi\); it is true that \(\phi\) may take the value \(+\infty\) but for the statements of Proposition 4.6 to hold good, it suffices that \(\phi (u(t))\le \phi (u(s)) < +\infty\) for all \(0\le s\le t < +\infty\), \(u\in \mathscr {U}_p(\phi, g)\). We note that \(\phi\) is injective on \(\mathscr {R}[u]\) for every \(u\in \mathscr {U}_p(\phi, g)\). To be more precise: the energy dissipation inequality (5.2) implies that for every \(u\in \mathscr {U}_p(\phi, g)\) and \(0\le s \le t < +\infty\) the following four points are equivalent:

  1. (i)

    \(\phi (u(s)) = \phi (u(t))\),

  2. (ii)

    \(|u'|(r) = 0\) for \(\mathscr {L}^1\)-a.e. \(r\in (s, t)\),

  3. (iii)

    \(u(r) = u(s)\) for all \(r\in [s, t]\),

  4. (iv)

    \(u(s) = u(t)\).
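
Indeed, (i) implies (ii) since (5.2) then gives \(0 = \phi (u(s)) - \phi (u(t)) \ge \frac{1}{p}\int _s^t{|u'|^p(r) \ dr} \ge 0\); (ii) implies (iii) since \(d(u(r), u(s)) \le \int _s^r{|u'|(\sigma ) \ d\sigma } = 0\) for all \(r\in [s, t]\) by Definition 5.1; the implications (iii) \(\Rightarrow\) (iv) \(\Rightarrow\) (i) are immediate.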

Moreover, we note that for \(\mathscr {U}=\mathscr {U}_p(\phi, g)\) and the range \(\mathscr {R}=\mathscr {R}[y]\subset \mathscr {S}\) of a solution \(y\in \mathscr {U}\), it holds that

$$\begin{aligned} w\in \mathscr {U}\,[\mathscr {R}] \quad \Leftrightarrow \quad w\in \mathscr {U}, \ \mathscr {R}\subset \mathscr {R}[w] \subset \overline{\mathscr {R}}. \end{aligned}$$

If g is a strong upper gradient for \(\phi\), then \(\mathscr {U}_p(\phi, g)\) coincides with the collection of all the p-curves of maximal slope for \(\phi\) w.r.t. g (cf. Remark 5.7). The next proposition deals with a special property of minimal solutions to a gradient flow in terms of the zero level set of the corresponding strong upper gradient.

Proposition 5.11

(cf. [13], Thm. 3.4 (5)) Let the assumptions of Theorem 5.8 be satisfied and suppose that g is a strong upper gradient for \(\phi\). Then the following two statements are equivalent for a solution \(u\in \mathscr {U}_p(\phi, g)\):

  1. (i)

    u is minimal,

  2. (ii)

    u crosses the set \(\{x\in \mathscr {S}\ | \ g(x) = 0\}\) of critical points of \(\phi\) w.r.t. its upper gradient g in an \(\mathscr {L}^1\)-negligible set of times, i.e.

    $$\begin{aligned} \mathscr {L}^1(\{t\in [0, T_\star (u)): \ g(u(t)) = 0\}) = 0. \end{aligned}$$
    (5.16)

Proof

First we notice some properties of \(\mathscr {U}_p(\phi, g)\) if g is a strong upper gradient (cf. Remark 5.7): every solution \(u\in \mathscr {U}_p(\phi, g)\) satisfies

$$\begin{aligned} \phi (u(s)) - \phi (u(t)) \ = \ \frac{1}{q}\int _s^t{g^q(u(r)) \ dr} \ + \ \frac{1}{p}\int _s^t{|u'|^p(r) \ dr} \end{aligned}$$
(5.17)

for all \(0\le s\le t < +\infty\), it holds that

$$\begin{aligned} g^q(u(r)) \ = \ |u'|^p(r) \quad \text {for } \mathscr {L}^1\text {-a.e. } r\in [0, +\infty ), \end{aligned}$$
(5.18)

and \(\phi \circ u\) is locally absolutely continuous with

$$\begin{aligned} (\phi \circ u)'(r) \ = \ -g^q(u(r)) \ = \ -|u'|^p(r) \quad \text {for } \mathscr {L}^1\text {-a.e. } r\in [0, +\infty ). \end{aligned}$$
(5.19)

Let us show that (ii) implies (i). Let \(u\in \mathscr {U}_p(\phi, g)\) satisfy (5.16) and suppose that \(u\succ v\) for some \(v\in \mathscr {U}_p(\phi, g)\). Then \(\mathscr {R}[v]\subset \overline{\mathscr {R}[u]}\) and there exists an increasing 1-Lipschitz map \({\mathsf {z}}: [0, +\infty ) \rightarrow [0, +\infty )\) with \({\mathsf {z}}(0) = 0\) such that \(u(t) = v({\mathsf {z}}(t))\) for all \(t\ge 0\). The map \({\mathsf {z}}\) is differentiable \(\mathscr {L}^1\)-a.e. in \([0, +\infty )\) and the chain rule for absolutely continuous functions (see e.g. [17], Thm. 3.44) and (5.19) yield

$$\begin{aligned} g^q(u(r)) = -(\phi \circ u)'(r) = -(\phi \circ v \circ {\mathsf {z}})'(r) = g^q(v({\mathsf {z}}(r))){\mathsf {z}}'(r) = g^q(u(r)) {\mathsf {z}}'(r) \end{aligned}$$

for \(\mathscr {L}^1\)-a.e. \(r\in [0, +\infty )\). By (5.16), it follows that \({\mathsf {z}}'(r) = 1\) for \(\mathscr {L}^1\)-a.e. \(r\in [0, T_\star (u))\), which implies \({\mathsf {z}}(t) = t\) for all \(t\in [0, T_\star (u))\). This shows that \(u = v\) and the claim is proved.

Now, we prove that (i) implies (ii). Let u be a minimal solution, with \(T_\star (u)\in (0, +\infty ]\). Let

$$\begin{aligned} \Omega \ := \ \{t\in (0, T_\star (u)) \ : \ g(u(t)) > 0\}. \end{aligned}$$

As g is lower semicontinuous (5.6), the set \(\Omega\) is open.

We define \({\mathsf {x}}: [0, T_\star (u)) \rightarrow [0, +\infty )\) as

$$\begin{aligned} {\mathsf {x}}(t) := \int _0^t{|u'|(r) \ dr}. \end{aligned}$$

The map \({\mathsf {x}}\) is locally absolutely continuous; further, it is strictly increasing since u is injective in \([0, T_\star (u))\) by Theorem 3.9, (2). Let

$$\begin{aligned} X:= \lim _{t\uparrow T_\star (u)}{\mathsf {x}}(t) = \int _0^{T_\star (u)}{|u'|(r) \ dr} \in (0, +\infty ]. \end{aligned}$$

There exists a strictly increasing, continuous inverse \({\mathsf {y}}: [0, X) \rightarrow [0, T_\star (u)),\)

$$\begin{aligned} {\mathsf {y}}({\mathsf {x}}(t)) = t \quad \text {for all } t\in [0, T_\star (u)), \quad {\mathsf {x}}({\mathsf {y}}({\mathrm x})) = {\mathrm x}\quad \text {for all } {\mathrm x}\in [0, X). \end{aligned}$$

Since \({\mathsf {y}}\) is monotone, it is differentiable \(\mathscr {L}^1\)-a.e. in [0, X) and its derivative \({\mathsf {y}}'\) belongs to \(L^1(0, X')\) for every \(X' < X\). We define \(\vartheta : [0, X) \rightarrow [0, +\infty )\),

$$\begin{aligned} \vartheta (\mathrm x):= \int _0^{\mathrm x}{{\mathsf {y}}'(r) \ dr}. \end{aligned}$$

The chain rule for absolutely continuous functions (see e.g. [17], Thm. 3.44) applied to \({\mathsf {x}}\circ {\mathsf {y}}\) yields \({\mathsf {y}}'(r) > 0\) for \(\mathscr {L}^1\)-a.e. \(r\in (0, X)\). So it holds that

$$\begin{aligned} 0< \vartheta ({\mathrm x}_2) - \vartheta ({\mathrm x}_1) \le {\mathsf {y}}({\mathrm x}_2) - {\mathsf {y}}({\mathrm x}_1) \quad \text {for all } 0\le {\mathrm x}_1< {\mathrm x}_2 < X, \end{aligned}$$

and the map \({\mathsf {z}}: [0, T_\star (u)) \rightarrow [0, +\infty )\), defined as \({\mathsf {z}}:= \vartheta \circ {\mathsf {x}}\), is strictly increasing and 1-Lipschitz, i.e.

$$\begin{aligned} 0< {\mathsf {z}}(t_2) - {\mathsf {z}}(t_1) \le t_2 - t_1 \quad \text {for all } 0\le t_1< t_2 < T_\star (u). \end{aligned}$$

The chain rule for absolutely continuous functions cannot be directly applied to \({\mathsf {y}}\circ {\mathsf {x}}\) since we do not know whether \({\mathsf {y}}\) is absolutely continuous or not, but imitating the proof of [17, Thm. 3.44], we obtain

$$\begin{aligned} {\mathsf {y}}'({\mathsf {x}}(t)) {\mathsf {x}}'(t) \ = \ 1 \quad \text { a.e. in } \Omega. \end{aligned}$$

We used (5.18). By the chain rule, now applied to \(\vartheta \circ {\mathsf {x}}\), it follows that

$$\begin{aligned} {\mathsf {z}}'(t) = 1 \quad \text { a.e. in } \Omega, \quad {\mathsf {z}}'(t) = 0 \quad \text { a.e. in } [0, T_\star (u))\setminus \Omega. \end{aligned}$$
(5.20)

Let

$$\begin{aligned} \theta \ := \ \lim _{{\mathrm x}\uparrow X} \vartheta ({\mathrm x}) \ = \ \int _0^X{{\mathsf {y}}'(r) \ dr} \in (0, +\infty ]. \end{aligned}$$

The map \({\mathsf {z}}\) has a strictly increasing, continuous inverse \({\mathsf {t}}: [0, \theta ) \rightarrow [0, T_\star (u))\).

We define \(w: [0, \theta ) \rightarrow \mathscr {S}, \ w:= u\circ {\mathsf {t}}\). It holds that

$$\begin{aligned} d(w(s), w(t)) \ \le \ \int _{{\mathsf {t}}(s)}^{{\mathsf {t}}(t)}{|u'|(r) \ dr} \ = \ {\mathsf {x}}({\mathsf {t}}(t)) - {\mathsf {x}}({\mathsf {t}}(s)) \end{aligned}$$

for all \(0\le s \le t < \theta\). Obviously, \({\mathsf {x}}\circ {\mathsf {t}}\) is the inverse map of \(\vartheta\). Since \(\vartheta\) is locally absolutely continuous with \(\vartheta '(r) = {\mathsf {y}}'(r) > 0\) a.e. in (0, X), its inverse \({\mathsf {x}}\circ {\mathsf {t}}\) is locally absolutely continuous. By change of variables (see e.g. [17], Thm. 3.54), we obtain

$$\begin{aligned} {\mathsf {x}}({\mathsf {t}}(t)) - {\mathsf {x}}({\mathsf {t}}(s)) \ = \ \int _s^t{|u'|({\mathsf {t}}(r)) {\mathsf {t}}'(r) \ dr} \quad 0\le s \le t < \theta. \end{aligned}$$

It follows that \(w\in AC_{\text {loc}}([0, \theta ); \mathscr {S})\), i.e. the metric derivative

$$\begin{aligned} |w'|(t):= \mathop {\lim }_{s\rightarrow t} \frac{d(w(s),w(t))}{|s-t|} \end{aligned}$$

exists for \(\mathscr {L}^1\)-a.e. \(t\in (0, \theta )\), the function \(t\mapsto |w'|(t)\) belongs to \(L^1_{\text {loc}}(0, \theta )\) and

$$\begin{aligned} d(w(s),w(t)) \le \int ^{t}_{s}{|w'|(r) \ dr} \quad \text {for all } 0 \le s\le t < \theta ; \end{aligned}$$

moreover, it holds that

$$\begin{aligned} |w'|(r) \le |u'|({\mathsf {t}}(r)){\mathsf {t}}'(r) \quad \text {a.e. in } (0, \theta ) \end{aligned}$$
(5.21)

(cf. Definition 5.1, [2, Def. 1.1.1 and Thm. 1.1.2]). Applying the chain rule for absolutely continuous functions to \({\mathsf {z}}\circ {\mathsf {t}}\), we obtain by (5.20) that

$$\begin{aligned} {\mathsf {t}}'(r) \ = \ 1 \quad \text {a.e. in } {\mathsf {z}}(\Omega ). \end{aligned}$$
(5.22)

We note that the map

$$\begin{aligned}{}[0, \theta )\ni s \mapsto \int _0^{{\mathsf {t}}(s)}{g^q(u(r)) \ dr} \end{aligned}$$

is strictly increasing and applying the chain rule for absolutely continuous functions, we obtain

$$\begin{aligned} \int _{{\mathsf {t}}(s_1)}^{{\mathsf {t}}(s_2)}{g^q(u(r)) \ dr} \ \ge \ \int _{s_1}^{s_2}{g^q(u({\mathsf {t}}(r))) {\mathsf {t}}'(r) \ dr}. \end{aligned}$$
(5.23)

Similarly,

$$\begin{aligned} \int _{{\mathsf {t}}(s_1)}^{{\mathsf {t}}(s_2)}{|u'|^p(r) \ dr} \ \ge \ \int _{s_1}^{s_2}{|u'|^p({\mathsf {t}}(r)) {\mathsf {t}}'(r) \ dr}. \end{aligned}$$
(5.24)

The curve u satisfies the energy dissipation inequality (5.2). Hence, combining (5.21)–(5.24), we obtain

$$\begin{aligned} \phi (w(s)) - \phi (w(t)) \ \ge \ \frac{1}{q}\int _s^t{g^q(w(r)) \ dr} \ + \ \frac{1}{p}\int _s^t{|w'|^p(r) \ dr} \end{aligned}$$

for all \(0\le s \le t < \theta\).

If \(\theta < +\infty\) and \(T_\star (u) = +\infty\), it holds that \(w([0, \theta )) = \mathscr {R}[u]\) and for every \(T \in (0, \theta )\), the map \(w_T: [0, +\infty ) \rightarrow \mathscr {S}\),

$$\begin{aligned} w_T(s):= {\left\{ \begin{array}{ll} w(s) &{}\text { if } 0\le s \le T \\ u(s - T + {\mathsf {t}}(T)) &{}\text { if } s > T \end{array}\right. } \end{aligned}$$

belongs to \(\mathscr {U}_p(\phi, g)\) (cf. the proof of Theorem 5.8, (H2)). Since \(\mathscr {U}_p(\phi, g)\) satisfies (H4), it follows that the limit \(\lim _{t\uparrow +\infty } u(t) =: u_\star \in \mathscr {S}\) exists, and \(\bar{w}\in \mathscr {U}_p(\phi, g)\), where

$$\begin{aligned} \bar{w}(t):= {\left\{ \begin{array}{ll} w(t) &{}\text { if } 0\le t < \theta \\ u_\star &{}\text { if } t\ge \theta \end{array}\right. } \end{aligned}$$

Moreover, it holds that \(\bar{w}({\mathsf {z}}(t)) = u(t)\) for all \(t\in [0, +\infty )\). Hence, \(u\succ \bar{w}\) which implies \(u = \bar{w}\) since u is minimal. We obtain

$$\begin{aligned} {\mathsf {z}}(t) \ = \ t \quad \text {for all } t\in [0, T_\star (u)) \end{aligned}$$
(5.25)

as u is injective in \([0, T_\star (u))\).

If \(T_\star (u) < +\infty\), then \(\theta < +\infty\), and it is not difficult to see that \(\bar{w}\) defined as above belongs to \(\mathscr {U}_p(\phi, g)\). Extending \({\mathsf {z}}\) by the constant value \(\theta\), we again obtain \(u\succ \bar{w}\), and thus (5.25).

If \(\theta = +\infty\) and \(T_\star (u) = +\infty\), then \(w\in \mathscr {U}_p(\phi, g)\) and \(u\succ w\), from which (5.25) follows.

So in any case, (5.25) holds. Taking into account (5.20), we may conclude that

$$\begin{aligned} \mathscr {L}^1([0, T_\star (u))\setminus \Omega ) = 0. \end{aligned}$$

This means that u satisfies (5.16). The proof is complete. \(\square\)
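
For a concrete illustration of Proposition 5.11, consider the following simple one-dimensional example (chosen by us for expository purposes and not taken from the references): let \(\mathscr {S}= \mathbb {R}\) with the Euclidean distance, \(p = q = 2\), \(\phi (x) := -\frac{2}{3}|x|^{3/2}\) and \(g(x) := |x|^{1/2} = |\phi '(x)|\), which is a strong upper gradient since \(\phi\) is continuously differentiable. For every \(t_0\ge 0\), the curve

$$\begin{aligned} u_{t_0}(t) := {\left\{ \begin{array}{ll} 0 &{}\text { if } 0\le t \le t_0 \\ \left( \frac{t-t_0}{2}\right) ^2 &{}\text { if } t > t_0 \end{array}\right. } \end{aligned}$$

belongs to \(\mathscr {U}_2(\phi, g)\) (with equality in (5.2)) and has range \([0, +\infty )\); moreover, \(u_{t_0}(t) = u_0({\mathsf {z}}(t))\) with \({\mathsf {z}}(t) := (t-t_0)^+\), so \(u_{t_0}\succ u_0\). Since \(g(u_{t_0}(t)) = 0\) for \(t\in [0, t_0]\) and \(T_\star (u_{t_0}) = +\infty\), only \(u_0\) satisfies (5.16): it is the minimal solution corresponding to this range, whereas each delayed curve \(u_{t_0}\) with \(t_0 > 0\) rests at the critical point \(x = 0\) for a time interval of positive measure and is not minimal.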

The strict monotonicity of \(\phi\) along minimal solutions and (5.16) Every minimal solution u is injective in \([0, T_\star (u))\) due to Theorem 3.9, (2). The functional \(\phi\) is injective on \(\mathscr {R}[u]\) for every \(u\in \mathscr {U}_p(\phi, g)\) (Remark 5.10). It follows that \(\phi\) is strictly decreasing along minimal solutions, i.e.

$$\begin{aligned} \phi (u(t))< \phi (u(s)) \quad \text { for all } 0\le s< t < T_\star (u), \quad u\in \mathscr {U}_{p, \text {min}}(\phi, g), \end{aligned}$$
(5.26)

where \(\mathscr {U}_{p,\text {min}}(\phi, g)\) denotes the collection of all the minimal solutions in \(\mathscr {U}_p(\phi, g)\).

We note that (5.26) is not sufficient to conclude that a solution is minimal. In [13, Appendix A], we give an example of a one-dimensional gradient flow for a function whose derivative has a Cantor-like zero level set \(K\subset \mathbb {R}\), and we construct a solution parametrized by a positive finite Cantor measure concentrated on K; this solution satisfies (5.26) but does not satisfy (5.16) and is not minimal. The example illustrates that condition (5.16) is stronger than (5.26) and that the strict monotonicity of the functional along a solution curve \(u\in \mathscr {U}_p(\phi, g)\) does not guarantee that \(u\in \mathscr {U}_{p,\text {min}}(\phi, g)\).

5.4 An example in the space of probability measures

Let \(p\in (2,+\infty )\). We illustrate Theorems 5.8 and 5.9 and Proposition 5.11 with an example in the space \(\mathcal {P}_p(\mathbb {R}^d)\) of Borel probability measures with finite moments of order p (i.e. \(\int _{\mathbb {R}^d}{|x|^p\mathrm {d} \mu } < +\infty\)). The space \(\mathcal {P}_p(\mathbb {R}^d)\) is endowed with the p-Wasserstein distance \(\mathcal {W}_p\),

$$\begin{aligned} \mathcal {W}_p(\mu _1, \mu _2)^p := \min _{\gamma \in \Gamma (\mu _1,\mu _2)}\int _{\mathbb {R}^d\times \mathbb {R}^d}{|x-y|^p\mathrm {d}\gamma }, \ \ \ \mu _i\in \mathcal {P}_p(\mathbb {R}^d), \end{aligned}$$

with \(\Gamma (\mu _1, \mu _2)\) being the set of Borel probability measures on \(\mathbb {R}^d\times \mathbb {R}^d\) whose first and second marginal coincide with \(\mu _1\) and \(\mu _2\) respectively (see e.g. [25, 26] for a detailed account of the theory of Optimal Transport and Wasserstein distances). The functional \(\phi : \mathcal {P}_p(\mathbb {R}^d)\rightarrow (-\infty, +\infty ]\),

$$\begin{aligned} \phi (\mu ):={\left\{ \begin{array}{ll} \int _{\mathbb {R}^d\times \mathbb {R}^d}{[F(u(x)) + V(x)u(x)+\frac{1}{2}W(x-y)u(x)]u(y)\mathrm {d}x\mathrm {d}y} &{} \text {if } \mu =u\mathcal {L}^d, \\ +\infty &{} \text {else}\end{array}\right. } \end{aligned}$$

is defined on \((\mathcal {P}_p(\mathbb {R}^d), \mathcal {W}_p)\), with

  1. (A1)

    \(F:[0,+\infty )\rightarrow \mathbb {R}\) being convex, continuous with \(F(0)=0\), differentiable in \((0,+\infty )\), bounded from below by \(s\mapsto -Cs^\alpha\) for some \(\alpha>\frac{d}{d+p}, C>0,\) having superlinear growth and satisfying

    $$\begin{aligned} s\mapsto s^dF(s^{-d}) \ \ \text { is convex nonincreasing in } (0,+\infty ) \end{aligned}$$

    and

    $$\begin{aligned} \exists C_F > 0: \ \ F(r+s)\le C_F(1+F(r)+F(s)) \ \ \text { for all } r,s \ge 0, \end{aligned}$$
  2. (A2)

    \(V:\mathbb {R}^d\rightarrow \mathbb {R}\) being nonnegative and \(\lambda\)-convex for some \(\lambda < 0\),

  3. (A3)

    \(W: \mathbb {R}^d\rightarrow \mathbb {R}\) being convex, nonnegative, differentiable, even and satisfying

    $$\begin{aligned} \exists C_W>0: \ \ W(x+y)\le C_W(1+W(x) + W(y)) \ \ \text { for all } x,y\in \mathbb {R}^d. \end{aligned}$$

Such a setting is a typical example considered in the study of gradient flows with possibly nonunique solutions in the space of probability measures (see e.g. Chapters 10.4 and 11 in [2] and the references therein). It allows the following points to be proved:

  • \(\phi\) \(\text {has a lower bound }\) (5.7) \(\text {of order } p\) in \((\mathcal {P}_p(\mathbb {R}^d), \mathcal {W}_p)\) (cf. the proof of Prop. 4.1 in [16]).

  • \(\phi\) \(\text {is lower semicontinuous }\) (5.6) \(\text { in } (\mathcal {P}_p(\mathbb {R}^d), \mathcal {W}_p)\) (see e.g. the proof of Lem. 3.3 in [14], Dunford–Pettis theorem and Thm. 7.12 in [25]).

  • \(\phi\) is \(\lambda\)-convex along constant speed geodesics in \((\mathcal {P}_p(\mathbb {R}^d), \mathcal {W}_p)\) (cf. [25, Thm. 5.15] and [2, Props. 9.3.2, 9.3.5, 9.3.9]).

  • The relaxed slope \(|\partial ^-\phi |\) is a strong upper gradient; it coincides with the local slope \(|\partial \phi |\) and is \(\text {lower semicontinuous }\) (5.6) \(\text { in } (\mathcal {P}_p(\mathbb {R}^d), \mathcal {W}_p)\) (cf. [2], Cor. 2.4.10, Prop. 10.4.14).

  • The family \(\mathscr {U}_p(\phi, |\partial ^-\phi |)\), which coincides with the collection of all the p-curves of maximal slope for \(\phi\) with respect to \(|\partial ^-\phi |\) (cf. Remark 5.7), is nonempty (cf. [2, Prop. 2.2.3, Thm. 2.3.3]). The proof is based on the notion of minimizing movements [8].

  • An equivalent characterization of a solution \((u(t,\cdot )\mathcal {L}^d)_{t\ge 0} \in \mathscr {U}_p(\phi, |\partial ^-\phi |)\) is given by a weak formulation of the diffusion equation

    $$\begin{aligned} \partial _t u -\nabla \cdot \Big (uj_q\Big (\frac{\nabla L_F(u)}{u} + \nabla V + (\nabla W) \star u\Big )\Big ) \ = \ 0, \end{aligned}$$

    with q the conjugate exponent of p,

    $$\begin{aligned} j_q(v) := {\left\{ \begin{array}{ll} |v|^{q-2}v &{} \text { if } v\ne 0, \\ 0 &{} \text { if } v = 0 \end{array}\right. } \end{aligned}$$

    and

    $$\begin{aligned} L_F(s):= {\left\{ \begin{array}{ll} sF'(s) - F(s) &{} \text { if } s\in (0,+\infty ), \\ 0 &{} \text { if } s=0 \end{array}\right. } \end{aligned}$$

    (cf. [2, Lem. 10.3.8, Thms. 10.4.13, 11.1.3]). The interpretation of dynamics governed by such a diffusion equation as a p-gradient flow for the energy functional \(\phi\) in the space \((\mathcal {P}_p(\mathbb {R}^d), \mathcal {W}_p)\) has its origin in the papers [15, 16, 20, 21].

  • If the relaxed slope is finite at \(\mu =u\mathcal {L}^d\in \mathcal {P}_p(\mathbb {R}^d)\), then it can be computed as

    $$\begin{aligned} |\partial ^-\phi |(\mu ) \ = \ \Big \Vert \frac{\nabla L_F(u)}{u} + \nabla V + (\nabla W) \star u\Big \Vert _{L^q(\mu, \mathbb {R}^d)} \end{aligned}$$

    with \(L_F(u)\in W_{\text {loc}}^{1,1}(\mathbb {R}^d)\) (cf. [2, Thms. 10.3.11, 10.4.13, Prop. 10.4.14]).

We refer the reader to the corresponding literature mentioned above for a detailed analysis of the role played by each of the assumptions made in (A1), (A2) and (A3).

There may be more than one solution in \(\mathscr {U}_p(\phi,|\partial ^-\phi |)\) corresponding to given initial data. However, the above statements show that Theorems 5.8 and 5.9 and Proposition 5.11 are applicable under (A1), (A2) and (A3): there exists a unique minimal solution corresponding to each given range, generating all other solutions with the same range by time reparametrization (1.1), and the collection \(\mathscr {U}_{p, \text {min}}(\phi,|\partial ^-\phi |)\) of all the minimal solutions coincides with the collection of all the solutions \((u(t,\cdot )\mathcal {L}^d)_{t\ge 0}\in \mathscr {U}_p(\phi,|\partial ^-\phi |)\) crossing the set

$$\begin{aligned} \Big \{\bar{u}\mathcal {L}^d\in \mathcal {P}_p(\mathbb {R}^d): L_F(\bar{u})\in W^{1,1}_{\text {loc}}(\mathbb {R}^d), \ \frac{\nabla L_F(\bar{u})}{\bar{u}} + \nabla V + (\nabla W)\star \bar{u} = 0 \ \ (\bar{u}\mathcal {L}^d)\text {-a.e.}\Big \} \end{aligned}$$

only in an \(\mathscr {L}^1\)-negligible set of times before \(T_\star (u)\).