1 Introduction

In this paper we consider two types of generalized gradient systems \((X,{{\mathcal {F}}},{{\mathcal {R}}})\) in the sense of [12], which are both given in terms of a Hilbert space X, an energy functional \({{\mathcal {F}}}\), and a dissipation structure \({{\mathcal {R}}}\), such that the induced evolution equation takes the form \(0 \in \partial {{\mathcal {R}}}(\dot{u}(t)) + \partial {{\mathcal {F}}}(t,u(t))\). The two systems are

  • the Hilbert-space gradient system (GS) \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\) and

  • the energetic rate-independent system (ERIS) \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\) with \({{\mathcal {E}}}(t,u)= t {{\mathcal {J}}}(u)\).

Here X is a Hilbert space with duality pairing \(\langle \cdot ,\cdot \rangle \), norm \(\Vert \cdot \Vert \), and Riesz isomorphism \(E:X\rightarrow X^*\). The link between these two systems arises from the assumption that \({{\mathcal {J}}}:X\rightarrow [0,\infty ]\) is positively homogeneous of degree 1, i.e. \({{\mathcal {J}}}(\lambda u)=\lambda {{\mathcal {J}}}(u)\) for all \(\lambda > 0\) and \(u\in X\). Moreover, \({{\mathcal {J}}}\) is convex, lower semicontinuous, and has a dense domain dom\(({{\mathcal {J}}}):= \big \{\, u\in X \, \big | \, {{\mathcal {J}}}(u)<\infty \,\big \} \).

The gradient-flow equation for the gradient system \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\) takes the form

$$\begin{aligned} 0 \in E w'(s) + \partial {{\mathcal {J}}}(w(s)) \quad \text {for a.a. }s>0, \qquad w(0)=u^0. \end{aligned}$$
(1.1)

We continue to use the letter \(s\ge 0\) for the time in the gradient-flow equation, while \(t\ge 0\) will be reserved for the time in the ERIS. On the formal level, the evolution equation induced by the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\) can be written in the analogous form

$$\begin{aligned} 0 \in \mathop {{\mathrm {Sign}}}(\dot{u}(t)) + \partial _u {{\mathcal {E}}}(t,u(t)) \quad \text {for a.a. } t>0, \qquad u(0)=u^0 , \end{aligned}$$
(1.2)

where \(\mathop {{\mathrm {Sign}}}(v)\subset X^*\) denotes the Hilbert-space signum function, which is the convex sub-differential of the norm \(\Vert \cdot \Vert \), namely

$$\begin{aligned} \mathop {{\mathrm {Sign}}}(v) = \partial (\Vert \cdot \Vert )(v) = \left\{ \begin{array}{ll} \big \{\, \xi \in X^* \, \big | \, \Vert \xi \Vert _*\le 1 \,\big \} &{} \text {for } v=0, \\ \displaystyle \frac{1}{\Vert v\Vert } Ev &{} \text {for } v\ne 0. \end{array}\right. \end{aligned}$$

Recalling \({{\mathcal {E}}}(t,u)=t{{\mathcal {J}}}(u)\), we find \(\partial _u {{\mathcal {E}}}(t,u)= t \partial {{\mathcal {J}}}(u)\) and see on the formal level that the gradient-flow equation (1.1) and the rate-independent evolution (1.2) are equivalent up to a time reparametrization. Indeed, assuming that \(u:{[0,\infty [} \rightarrow X\) is a sufficiently smooth solution of the rate-independent evolution (1.2), we define a reparametrization \(s=S(t)\) via

$$\begin{aligned} S(t)= \int _0^t \tau \Vert \dot{u}(\tau )\Vert \;\!{\mathrm {d}}\tau \end{aligned}$$

and assume further that we can invert the relation to obtain \(t=T(s)\). Then, the chain rule shows that w defined via \(w(s)=u(T(s))\) is a solution of the gradient-flow equation (1.1). Vice versa, if a sufficiently smooth solution w of (1.1) is given, we define the reparametrization \(t=T(s)\) via

$$\begin{aligned} T(s)= \frac{1}{\Vert w'(s)\Vert } \end{aligned}$$
(1.3)

and assume that the inversion \(s=S(t)\) exists, then \(u(t)=w(S(t))\) solves (1.2).
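For later reference, we spell out the (purely formal) chain-rule computation behind the first direction. From \(S'(t)= t\Vert \dot{u}(t)\Vert \) we obtain \(T'(s)= 1/\big (T(s)\,\Vert \dot{u}(T(s))\Vert \big )\), and for \(\dot{u}(t)\ne 0\) the inclusion (1.2) provides some \(\xi (t)\in \partial {{\mathcal {J}}}(u(t))\) with \(t\,\xi (t)= -\frac{1}{\Vert \dot{u}(t)\Vert }\,E \dot{u}(t)\). Hence, \(w(s)=u(T(s))\) satisfies

$$\begin{aligned} E w'(s) = T'(s)\, E\dot{u}(T(s)) = \frac{E\dot{u}(T(s))}{T(s)\,\Vert \dot{u}(T(s))\Vert } = -\,\xi (T(s)) \in -\partial {{\mathcal {J}}}(w(s)), \end{aligned}$$

which is exactly (1.1). Of course, this computation assumes \(\dot{u}(t)\ne 0\) and sufficient smoothness; the rigorous transfer is the content of Sects. 5 and 6.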

Because of the 1-homogeneity of \({{\mathcal {J}}}\) and the simple structure of the dissipation in terms of the norm \(\Vert \cdot \Vert \), both systems have a scaling invariance. For all \(\lambda >0\) we have the implications:

$$\begin{aligned} w \text { solves }(1.1) \quad&\Longrightarrow \quad w_\lambda : s \mapsto \frac{1}{\lambda }\,w(\lambda s) \text { solves } (1.1), \end{aligned}$$
(1.4a)
$$\begin{aligned} u \text { solves }(1.2) \quad&\Longrightarrow \quad \widetilde{u}_\lambda : t \mapsto \frac{1}{\lambda }\, u(t) \ \ \text { solves } (1.2). \end{aligned}$$
(1.4b)

As for linear equations \(w'=-Lw\), where the existence of an eigenpair \((\varphi ,\lambda )\) of L, i.e. \(L\varphi =\lambda \varphi \), leads to the explicit solutions \(w(s)= c{\mathrm {e}}^{-\lambda s} \varphi \), we obtain explicit solutions from nontrivial solutions of the relation \(\partial ^0{{\mathcal {J}}}(\psi )=\lambda E\psi \), where \(\partial ^0{{\mathcal {J}}}(u)\) denotes the unique element in \(\partial {{\mathcal {J}}}(u)\) with minimal dual norm \(\Vert \cdot \Vert _*\). For all \(\rho >0\) we find that

$$\begin{aligned}&w(s)=\max \{\rho {-}\lambda s, 0\}\, \psi \ \text { solves }(1.1), \qquad \text { and} \end{aligned}$$
(1.5a)
$$\begin{aligned}&u(t)= \varvec{1}_{[0,t_*]}(t) \,\rho \psi \ \text { solves }(1.2) \quad \text { (in the sense of} (1.6)), \end{aligned}$$
(1.5b)

where \(t_*=1/\Vert \lambda \psi \Vert \) and \(\varvec{1}_A(t)=1\) for \(t\in A\) and 0 otherwise.
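These formulas can be tested in the simplest scalar model case \(X={{\mathbb {R}}}\), \({{\mathcal {J}}}(u)=|u|\), where \(\psi =1\) and \(\lambda =1\) satisfy the eigen relation \(\mathop {{\mathrm {sign}}}(\psi )=\lambda \psi \). The following minimal numerical sketch (forward Euler; step size and final time are ad hoc choices for illustration) reproduces (1.5a):

```python
import numpy as np

# Scalar model case X = R, J(u) = |u|: psi = 1 and lam = 1 satisfy the eigen
# relation sign(psi) = lam*psi, so (1.5a) predicts w(s) = max(rho - lam*s, 0),
# and (1.5b) predicts a jump of u at t_* = 1/(lam*|psi|) = 1.
rho, dt = 1.0, 1e-3
s = np.arange(0.0, 2.0, dt)
w = np.empty_like(s)
w[0] = rho
for k in range(len(s) - 1):
    w[k+1] = w[k] - dt*np.sign(w[k])     # forward Euler for w' = -sign(w)
err = np.max(np.abs(w - np.maximum(rho - s, 0.0)))   # O(dt) agreement expected
```

The Euler iterates follow the affine decay exactly until extinction and then chatter within one step size of \(w=0\), so the error stays of order \(\mathrm {d}t\).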

The purpose of this work is to make these formal observations rigorous, thus relating the two generalized gradient systems in a mathematically precise way.

To indicate one of the difficulties, we observe that for (1.3) the mapping \(s\mapsto \Vert w'(s)\Vert \) should be monotone, which is a standard feature for Hilbert-space gradient flows with convex potentials (cf. [7, Thm. 3.1(6)]), but as the solution in (1.5a) shows, we cannot expect strict monotonicity. This is related to the fact that the solutions of the rate-independent evolution (1.2) are in general not even continuous, see (1.5b) for an example. Hence, it is necessary to replace the differential formulation (1.2) by a derivative-free one, which is available for ERIS. We refer to [11, 14] for different solution concepts for rate-independent systems.

For our purposes, the concept of energetic solutions for the ERIS as introduced in [10, 16] will be appropriate as it allows for jumps. We call \(u:{[0,\infty [} \rightarrow X\) an energetic solution for \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\), if \(t \mapsto \partial _t {{\mathcal {E}}}(t,u(t))\) lies in \(\mathrm {L}^1_{\mathrm {loc}}({[0,\infty [})\) and for all \(t\ge 0\) we have the global stability (S) and the energy balance (E):

$$\begin{aligned} \begin{aligned} \text {(S)}\qquad&\forall \,\widetilde{u}\in X:\quad {{\mathcal {E}}}(t,\widetilde{u})\le {{\mathcal {E}}}(t,u(t))+ \Vert \widetilde{u}{-}u(t)\Vert ; \\ \text {(E)}\qquad&{{\mathcal {E}}}(t,u(t)) + \text {Var}_{\Vert \cdot \Vert }(u,[0,t]) = {{\mathcal {E}}}(0,u(0)) + \int _0^t \partial _s {{\mathcal {E}}}(s,u(s))\;\!{\mathrm {d}}s, \end{aligned} \end{aligned}$$
(1.6)

where \(\text {Var}_{\Vert \cdot \Vert }(u,[s,t])= \sup \{\, \sum _{j=1}^N \Vert u(t_j){-}u(t_{j-1}) \Vert \, | \, N\in {{\mathbb {N}}},\ s\le t_0<t_1< \cdots < t_N\le t \,\} \).

Because of the convexity of \(u\mapsto {{\mathcal {E}}}(t,u)=t{{\mathcal {J}}}(u)\) it is natural to construct approximate solutions via the minimizing-movement scheme, i.e. by choosing a time step \(h>0\) and defining \(u^{k}_h\) as the minimizer of \(u \mapsto \Vert u{-}u^{k-1}_h\Vert + {{\mathcal {E}}}(k h, u)\). Under the additional assumption that \({{\mathcal {J}}}\) has compact sublevels in X, it is then standard (cf. [10, 14]) to show that solutions can be obtained as accumulation points of these approximations, see also Proposition 4.6. However, without this compactness assumption the existence of solutions is largely open, except for the case that \({{\mathcal {E}}}(t,\cdot )\) is quadratic. Even worse, uniqueness can only be shown in situations where \({{\mathcal {E}}}(t,\cdot )\) has a Lipschitz-continuous second derivative, see [5, 13, 15]. Thus, it is surprising that the ERIS with \({{\mathcal {E}}}(t,u)=t{{\mathcal {J}}}(u)\) as considered here provides a model class where we can show both (i) existence of solutions without assuming compactness and (ii) uniqueness of solutions.
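In the scalar case \(X={{\mathbb {R}}}\) with \({{\mathcal {J}}}(u)=|u|\), the minimizing-movement scheme can be sketched by brute-force minimization over a grid (grid and step size are ad hoc choices for illustration, not the scheme used in the analysis). It reproduces the jump of the explicit solution (1.5b) at \(t_*=1\):

```python
import numpy as np

def minimizing_movement(u0, J, h, n_steps, grid):
    """u_h^k = argmin_u ||u - u_h^{k-1}|| + (k h) J(u), found by brute force."""
    us = [u0]
    for k in range(1, n_steps + 1):
        vals = np.abs(grid - us[-1]) + (k*h)*J(grid)
        us.append(grid[np.argmin(vals)])
    return np.array(us)

J = np.abs                                   # 1-homogeneous energy on X = R
grid = np.linspace(-2.0, 2.0, 4001)          # 0.001-spaced search grid
traj = minimizing_movement(1.0, J, h=0.01, n_steps=200, grid=grid)
# traj stays at u = 1 while t = k*h < 1 and is (approximately) 0 for t > 1,
# matching the jump of the energetic solution at t_* = 1
```

For \(t<1\) the incremental problem is minimized by staying put, while for \(t>1\) the minimizer jumps to 0, in agreement with the stability threshold \(t_* = \Vert \psi \Vert /{{\mathcal {J}}}(\psi )=1\).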

Here uniqueness holds up to the choice of the jump behavior. Because of the finiteness of the variation \(\text {Var}_{\Vert \cdot \Vert }(u,[0,t])\) it is clear that at all times \(t\ge 0\) the right and the left limits \(u(t^+):=\lim _{\tau \rightarrow t^+} u(\tau )\) and \(u(t^-):= \lim _{\tau \rightarrow t^-} u(\tau )\) exist. If u is an energetic solution, we can always modify u such that \(u(t)=u(t^+)\) or \(u(t)=u(t^-)\), and we still have an energetic solution. Indeed, for our convex case we may even set \(u(t)=(1{-}\theta )u(t^+)+ \theta u(t^-)\) for any \(\theta \in [0,1]\). Thus, uniqueness holds only if we prescribe the jump behavior, e.g. by requiring left-continuity, i.e. \(u(t)=u(t^-)\) for all t.

As a consequence of the transfer between the two systems, for which the classical results of Brézis [7, Thm. 3.1+3.2] provide existence and uniqueness on the gradient-flow side, we obtain the following result for the ERIS.

Theorem 1.1

The ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\) with \({{\mathcal {E}}}(t,u)=t{{\mathcal {J}}}(u)\) possesses for all initial values \(u^0\in X\) with \({{\mathcal {J}}}(u^0)<\infty \) a unique left-continuous energetic solution \(u:{[0,\infty [} \rightarrow X\) in the sense of (4.1).

For cases with \({{\mathcal {J}}}(u^0)=\infty \) we still have \({{\mathcal {E}}}(0,u^0)=0\), but there is a delicate issue about attainment of the initial condition discussed in Proposition 4.4 and Remark 4.5.

The plan of the paper is as follows: In Sect. 2 we recall the relevant classical facts from the gradient-flow theory developed in [7, Thm. 3.1+3.2]. Moreover, we discuss the case that \(s \mapsto \Vert w'(s)\Vert \) has a plateau and show that this implies that the solution must follow a straight line.

In Sect. 3 we consider several examples, first a few simple finite-dimensional ones. Then, we provide an infinite-dimensional example in \(\mathrm {L}^2({{\mathbb {R}}})\) where all solutions can be calculated explicitly and where we are able to choose an initial value \(u^0\) such that \(\int _{t=0}^1 \Vert \dot{u}(t)\Vert \;\!{\mathrm {d}}t =\infty \), i.e. the right limit \(u(0^+)=\lim _{\tau \rightarrow 0^+}u(\tau )\) does not exist. Finally, we briefly refer to the so-called “total-variation flow” that is the main motivation for the study of gradient systems with one-homogeneous energy. Motivated by questions in image denoising, one considers \(X= \mathrm {L}^2(\Omega )\) and \({{\mathcal {J}}}(w)=\int _\Omega |\nabla w| \;\!{\mathrm {d}}x\), see [1, 3, 6]. Also there, solutions with constant velocity \(w'(s)=v_*\) play an important role, e.g. in the form \(w(s,x)= \max \{ 1{-}\lambda _* s, 0 \} \varvec{1}_\omega \), where \(\omega \) is a suitable subset of \(\Omega \), see [3].

In Sect. 5 we discuss how solutions w of the gradient-flow equation (1.1) generate energetic solutions u, whereas Sect. 6 provides the opposite direction and concludes with the proof of Theorem 1.1.

2 The Gradient Flow

The theory developed in [7, Thm. 3.1+3.2] can be applied to the gradient-flow equation

$$\begin{aligned} 0\in E w'(s)+ \partial {{\mathcal {J}}}(w(s)), \qquad w(0)=w_0\in X \end{aligned}$$
(2.1)

induced by the GS \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\). Since \({{\mathcal {J}}}\) is non-negative, lower semicontinuous, convex, and has a dense domain, the induced semiflow \(({{\mathfrak {T}}}_s)_{s\ge 0}\) is a strongly continuous contraction semigroup on all of X, i.e.

$$\begin{aligned} \begin{aligned}&{{\mathfrak {T}}}_r \circ {{\mathfrak {T}}}_s = {{\mathfrak {T}}}_{r+s} \text { for } r,s\ge 0;\\&s \mapsto w(s)={{\mathfrak {T}}}_s (u^0) \text { is a strongly } \text { continuous solution of }(2.1) \text { with } w(0)=u^0,\\&\Vert {{\mathfrak {T}}}_s(w_1)-{{\mathfrak {T}}}_s(w_0)\Vert \le \Vert w_1{-}w_0\Vert \quad \text { for all } w_0,w_1\in X. \end{aligned} \end{aligned}$$
(2.2)

In particular, we have existence and uniqueness for the initial value problem (2.1). Every solution satisfies the energy-dissipation balance

$$\begin{aligned} {{\mathcal {J}}}(w(s_2)) + \int _{s_1}^{s_2} \Vert w'(s)\Vert ^2 \;\!{\mathrm {d}}s = {{\mathcal {J}}}(w(s_1)) \text { for } 0<s_1< s_2, \end{aligned}$$
(2.3)

such that \(w \in \mathrm {H}^1({[s_1,\infty [};X)\) for all \(s_1>0\). In particular, \(w'(s)\) exists for a.a. \(s \ge 0\). The last relation in (2.2) applied to \(w_0=w(0)\) and \(w_1=w(h)={{\mathfrak {T}}}_h(w(0))\) for \(h>0\), shows that every solution \(w(s)={{\mathfrak {T}}}_s(w(0))\) satisfies

$$\begin{aligned} \Vert w(s_2{+}h) - w(s_2)\Vert \le \Vert w(s_1{+}h) - w(s_1)\Vert \text { for all } h>0 \text { and } 0\le s_1<s_2. \end{aligned}$$

Dividing by h and taking \(h\rightarrow 0^+\), we find \( \Vert w'(s_2)\Vert \le \Vert w'(s_1)\Vert \) for a.a. \(0<s_1<s_2\). According to [7, Thm. 3.1 (5)+(6)], the one-sided derivative from the right behaves even better:

$$\begin{aligned}&w'_+(s):= \lim _{h\rightarrow 0^+} \frac{1}{h} \big (w(s{+}h) - w(s)\big ) \quad \text {exists for all } s> 0. \end{aligned}$$
(2.4a)
$$\begin{aligned}&s \mapsto w'_+(s) \text { is continuous from the right}, \end{aligned}$$
(2.4b)
$$\begin{aligned}&s\mapsto \Vert w'_+(s)\Vert \text { is non-increasing and continuous from the right}. \end{aligned}$$
(2.4c)

Since \(w'(s)\) exists a.e., we have \(w'(s)=w'_+(s)\) for almost all \(s> 0\).

Moreover, it is shown in [2] that (2.1) is equivalent to the “Evolutionary Variational Inequality” (EVI), which here takes the form

$$\begin{aligned} \frac{1}{2}\Vert w(s){-}v\Vert ^2 - \frac{1}{2}\Vert w(r){-}v\Vert ^2 \le (s{-}r) \big ( {{\mathcal {J}}}(v) - {{\mathcal {J}}}(w(s))\big ) \text { for all }v\in X \text { and } 0\le r< s. \end{aligned}$$
(2.5)

Setting \(r=0\) and \(v=0\) and using \({{\mathcal {J}}}(0)=0\) we find the a priori estimate

$$\begin{aligned} {{\mathcal {J}}}(w(s)) \le \frac{1}{2s}\Vert w(0)\Vert ^2 \text { for } s>0. \end{aligned}$$

Inserting this into the energy-dissipation estimate and using \({{\mathcal {J}}}\ge 0\) we find

$$\begin{aligned} \frac{1}{s} \Vert w(0)\Vert ^2 \ge {{\mathcal {J}}}(w(s/2))\ge \int _{s/2}^{s} \Vert w'(\sigma )\Vert ^2 \;\!{\mathrm {d}}\sigma \ge \frac{s}{2} \Vert w_+'(s)\Vert ^2 . \end{aligned}$$

Thus we conclude the a priori estimate

$$\begin{aligned} \Vert w'_+(s)\Vert \le \frac{\sqrt{2}}{s} \Vert w(0)\Vert \quad \text {for all } s>0. \end{aligned}$$
(2.6)

So far, we have only used the convexity of \({{\mathcal {J}}}\) and \({{\mathcal {J}}}(w)\ge {{\mathcal {J}}}(0)=0\). The coming results rely on the assumption that \({{\mathcal {J}}}\) is positively 1-homogeneous.

The first result is relevant because the graph of the 1-homogeneous functional \({{\mathcal {J}}}\) contains segments like the rays \( \{\, \alpha v \, | \, \alpha \ge 0 \,\} \). However, there may be even more segments if the sublevel \( \{\, w \in X \, | \, {{\mathcal {J}}}(w)\le 1 \,\} \) is not strictly convex. The result characterizes the case in which the speed mapping \(s\mapsto \Vert w'_+(s)\Vert \) is constant on an interval \({[s_1,s_2[}\). Using the strict convexity of the Hilbert-space norm \(\Vert \cdot \Vert \), we conclude that w restricted to the interval \([s_1,s_2]\) must be affine. This property will be crucial for relating the solutions of the GS \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\) to the solutions of the ERIS.

Proposition 2.1

(Intervals of constant speed) Assume that \(w:{[0,\infty [}\rightarrow X\) is a solution of (2.1) and that \(\Vert w'_+(s)\Vert = \mathop {{\mathrm {const}}}\) for \( s \in {]s_1,s_2[}\). Then, \(\displaystyle w(s)=\frac{s_2{-}s}{s_2{-}s_1} w(s_1)+ \frac{s{-}s_1}{s_2{-}s_1} w(s_2)\) for all \(s\in [s_1,s_2]\), i.e. \(w|_{[s_1,s_2]}\) is affine.

Proof

We let \(\phi =\Vert w'_+(s)\Vert \) be the constant. Obviously, only the case \(\phi >0\) is interesting.

We consider \(r,s\) with \(s_1 \le r < s \le s_2\). On the one hand, the energy-dissipation balance (2.3) gives

$$\begin{aligned} {{\mathcal {J}}}(w(s)) + ( s{-}r ) \phi ^2 = {{\mathcal {J}}}(w(r)). \end{aligned}$$
(2.7)

On the other hand, the equation gives \(-E w'(s)=\eta (s) \in \partial {{\mathcal {J}}}(w(s))\) for a.a. \(s\ge 0\). Hence, we have \(\Vert \eta (r)\Vert _*=\phi \) for a.a. \(r\in [s_1,s_2]\). Thus, convexity of \({{\mathcal {J}}}\) gives the lower estimate

$$\begin{aligned} {{\mathcal {J}}}(w(s)) \ge {{\mathcal {J}}}(w(r)) + \langle \eta (r),w(s){-}w(r)\rangle \ge {{\mathcal {J}}}(w(r)) - \Vert \eta (r)\Vert _* \Vert w(s){-}w(r)\Vert . \end{aligned}$$

Together with (2.7), this implies

$$\begin{aligned} \phi \Vert w(s){-}w(r)\Vert \ge {{\mathcal {J}}}(w(r)) - {{\mathcal {J}}}(w(s)) = (s{-}r) \phi ^2. \end{aligned}$$

Combining this with the trivial upper bound

$$\begin{aligned} \Vert w(s)- w(r)\Vert \le \int _r^s \Vert w'_+(\sigma )\Vert \;\!{\mathrm {d}}\sigma \le (s{-}r) \phi , \end{aligned}$$

we conclude \(\Vert w(s){-}w(r)\Vert = (s{-}r) \phi \), which shows that \(w|_{[s_1,s_2]}\) is a constant-speed geodesic curve in the Hilbert space \((X,\Vert \cdot \Vert )\) and hence a straight line segment. \(\square \)

We continue with an auxiliary result for the solutions of the gradient system that will be needed later.

Proposition 2.2

(Norm and energy decay) Assume that \(w:{[0,\infty [} \rightarrow X\) is a solution of (1.1). Then, \(s\mapsto \Vert w(s)\Vert \) is non-increasing. More precisely, we have

$$\begin{aligned} \frac{1}{2} \Vert w(r)\Vert ^2 + \int _s^r {{\mathcal {J}}}(w(\sigma ))\;\!{\mathrm {d}}\sigma = \frac{1}{2}\Vert w(s)\Vert ^2 \text { for } 0< s <r. \end{aligned}$$
(2.8)

In particular, the energy \(s\mapsto {{\mathcal {J}}}(w(s))\) is integrable with

$$\begin{aligned} \int _0^\infty {{\mathcal {J}}}(w(s)) \;\!{\mathrm {d}}s \le \frac{1}{2} \Vert w(0)\Vert ^2, \end{aligned}$$
(2.9)

which is non-trivial for \(s\approx 0\) as well as for \(s\rightarrow \infty \).

Proof

Since \({{\mathcal {J}}}\) is 1-homogeneous, we have \(\langle \eta , w\rangle ={{\mathcal {J}}}(w)\) for all \(\eta \in \partial {{\mathcal {J}}}(w)\). Hence, we may multiply (1.1) by w(s) to obtain

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}s} \frac{1}{2}\Vert w(s)\Vert ^2 = \langle w(s),w'(s)\rangle = - {{\mathcal {J}}}(w(s)) \le 0. \end{aligned}$$

Thus, for every solution w of (1.1) and \(0<s_1<s_2\) we have

$$\begin{aligned} \int _{s_1}^{s_2} {{\mathcal {J}}}(w(s)) \;\!{\mathrm {d}}s = -\int _{s_1}^{s_2} \frac{\mathrm {d}}{\mathrm {d}s} \frac{1}{2}\Vert w(s)\Vert ^2 \;\!{\mathrm {d}}s = \frac{1}{2} \Vert w(s_1)\Vert ^2 - \frac{1}{2} \Vert w(s_2)\Vert ^2, \end{aligned}$$

which is the desired result. \(\square \)

The previous result can be seen as a special case of the “inverse energy-dissipation balance” also used in [8, Lem. 4.1]. For quadratic dissipation potentials \({{\mathcal {R}}}(v)=\frac{1}{2}\Vert v\Vert ^2\) in a Hilbert space and general convex functionals \({{\mathcal {J}}}\), the gradient-flow equation \(0\in \mathrm {D}{{\mathcal {R}}}(w')+\partial {{\mathcal {J}}}(w)\) can also be rewritten as

$$\begin{aligned} {{\mathcal {R}}}(w(s)) + \int _r^s \big \{{{\mathcal {J}}}(w(\sigma )) +{{\mathcal {J}}}^*\big ({-}\mathrm {D}{{\mathcal {R}}}(w'(\sigma ))\big ) \big \} \;\!{\mathrm {d}}\sigma = {{\mathcal {R}}}(w(r)) \text { for all } s>r\ge 0. \end{aligned}$$

In our special case of the 1-homogeneous \({{\mathcal {J}}}\), the Legendre–Fenchel dual \({{\mathcal {J}}}^*\) satisfies \({{\mathcal {J}}}^*(\xi )=0\) for \(\xi \in \partial {{\mathcal {J}}}(0)\) and \({{\mathcal {J}}}^*(\xi )=\infty \) otherwise. Hence, \({{\mathcal {J}}}^*\) does not appear in (2.8).

We conclude this section with an estimate on the extinction time \(S_{\mathrm {extinct}}(u^0)\), i.e. the time such that the solution w with \(w(0)=u^0\) satisfies \(w(s)=0 \) for \(s\ge S_{\mathrm {extinct}}(u^0)\).

Proposition 2.3

(Extinction) Assume that for the GS \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\) we additionally have

$$\begin{aligned} {{\mathcal {J}}}\text { is 1-homogeneous and } \quad \exists \,\beta >0\ \forall \, u\in X: \ {{\mathcal {J}}}(u) \ge \beta \Vert u\Vert . \end{aligned}$$
(2.10)

Then, the solution w with \(w(0)=u^0\) satisfies \(\Vert w(s)\Vert \le \max \big \{ 0, \Vert u^0\Vert {-}\beta s\big \}\), which implies \(S_{\mathrm {extinct}}(u^0)\le \Vert u^0\Vert /\beta \).

Proof

We simply estimate the norm via \(\frac{1}{2}\frac{\mathrm {d}}{\mathrm {d}s} \Vert w(s)\Vert ^2 = \langle Ew',w\rangle =- {{\mathcal {J}}}(w) \le -\beta \Vert w(s)\Vert \). Setting \(\rho (s)=\Vert w(s)\Vert \ge 0\), this means \(\rho (\rho '{+}\beta )\le 0\), which implies the desired result. \(\square \)

3 Some Examples

We first consider three examples with \(X={{\mathbb {R}}}^2\) equipped with the Euclidean norm and then a simple case on \(X=\mathrm {L}^2(\Omega )\) that can be solved explicitly.

3.1 Piecewise Affine Example

We consider \({{\mathcal {J}}}(u)=\max \{|u_1|,2|u_2|\}\). For the subdifferential of \({{\mathcal {J}}}\) we find

$$\begin{aligned} \partial {{\mathcal {J}}}(u)= \left\{ \begin{array}{ll} \{ (\mathop {{\mathrm {sign}}}(u_1),0)^\top \}&{}\text {for }|u_1|>2|u_2|,\\ \{ (0,2\mathop {{\mathrm {sign}}}(u_2))^\top \}&{}\text {for }|u_1|<2|u_2|,\\ \big \{\, (\mathop {{\mathrm {sign}}}(u_1)(1{-}\theta ),2\mathop {{\mathrm {sign}}}(u_2)\theta )^\top \, \big | \, \theta \in [0,1] \,\big \} &{} \text {for }|u_1|=2|u_2|\ne 0,\\ \big \{\, (\eta _1,\eta _2)^\top \, \big | \, 2|\eta _1|{+}|\eta _2|\le 2 \,\big \} &{} \text {for }u=0. \end{array} \right. \end{aligned}$$

For the case \(|u_1|=2|u_2|\ne 0\), the minimal element \(\partial ^0{{\mathcal {J}}}(u)\) of \(\partial {{\mathcal {J}}}(u)\) takes the form \( \partial ^0{{\mathcal {J}}}(u)= \frac{1}{5} (4\mathop {{\mathrm {sign}}}(u_1),2\mathop {{\mathrm {sign}}}(u_2))^\top \), and all other cases are trivial.

Since \(\partial ^0{{\mathcal {J}}}\) only takes finitely many values, the solutions are easily constructed from straight line segments in \({{\mathbb {R}}}^2\). Without loss of generality we consider the case \(w(0)=u^0=(u_1^0,u_2^0)\) with \(0<2u_2^0< u_1^0\); then the explicit solution reads

$$\begin{aligned} w(s) = \left\{ \begin{array}{ll} u^0 - (s,0)^\top &{} \text {for } s \in [0,u_1^0{-} 2u_2^0], \\ \frac{2u_1^0{+}u_2^0{-}2s}{5} \,(2,1)^\top &{} \text {for }s \in [u_1^0{-} 2u_2^0,u_1^0{+}u_2^0/2], \\ 0 &{} \text {for }s\ge u_1^0{+}u_2^0/2. \end{array} \right. \end{aligned}$$

We see that \(w'_+\) only takes three values, namely \(w'_+(s)\in \{({-}1,0)^\top , ({-}4/5,{-}2/5)^\top ,(0,0)^\top \}\). Moreover, every solution reaches \(w=0\) in finite time, namely \(S_\text {extinct}(u^0)=|u_1^0|+|u_2^0|/2\), see also Fig. 1.

Fig. 1
figure 1

Gradient flow in \({{\mathbb {R}}}^2\) for \({{\mathcal {J}}}(u)=\max \{|u_1|,2|u_2|\}\): the level sets of \({{\mathcal {J}}}\) are indicated in gray, and a few orbits are drawn in blue (color figure online)
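The explicit solution above can be cross-checked with a forward Euler scheme for the minimal-selection flow \(w'=-\partial ^0{{\mathcal {J}}}(w)\) (a sketch with ad hoc step size; on the switching ray \(|u_1|=2|u_2|\) the discrete solution chatters around the sliding motion):

```python
import numpy as np

def dJ0(u):
    """Minimal-norm element of the subdifferential of J(u) = max{|u1|, 2|u2|}."""
    u1, u2 = u
    if abs(u1) > 2*abs(u2):
        return np.array([np.sign(u1), 0.0])
    if abs(u1) < 2*abs(u2):
        return np.array([0.0, 2*np.sign(u2)])
    if u1 == 0.0:                       # u = 0: the subdifferential contains 0
        return np.zeros(2)
    return np.array([0.8*np.sign(u1), 0.4*np.sign(u2)])   # the (1/5)(4,2)-section

dt = 1e-4
w = np.array([1.0, 0.25])               # initial value with 0 < 2*u2^0 < u1^0
traj = [w.copy()]
for _ in range(12000):                  # integrate up to s = 1.2 > S_extinct = 1.125
    w = w - dt*dJ0(w)
    traj.append(w.copy())
traj = np.array(traj)
# traj follows u0 - (s,0) until s = 0.5, then slides along the ray (2,1) and
# reaches a neighborhood of 0 of size O(dt) by s = 1.125
```

Both Euler step types move the iterate along the ray \((2,1)\) by the same amount \(2\,\mathrm {d}t/\sqrt{5}\), so the chattering reproduces the sliding speed \(2/\sqrt{5}\) up to a transverse error of order \(\mathrm {d}t\).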

3.2 Singular Potential

We again consider the Hilbert space \({{\mathbb {R}}}^2\) with the Euclidean norm. The potential is

$$\begin{aligned} {{\mathcal {J}}}(u)= \frac{|u_1|^{\alpha +1}}{u_2^\alpha } \text { for } u_2>0, \quad {{\mathcal {J}}}(0)=0,\quad \text { and }{{\mathcal {J}}}(u)=\infty \text { otherwise}, \end{aligned}$$

where the exponent \(\alpha \) satisfies \(\alpha \ge 1\), such that \({{\mathcal {J}}}\) is indeed convex and lower semicontinuous. The space X is now the closure of the domain of \({{\mathcal {J}}}\), namely \(X={{\mathbb {R}}}\times {[0,\infty [} \subset {{\mathbb {R}}}^2\).

The point is that we are able to characterize the solutions explicitly. Using the gradient-flow equation

$$\begin{aligned} w'_1 = - (\alpha {+}1) \frac{|w_1|^{\alpha -1}w_1}{w_2^{\alpha }} , \qquad w'_2= \alpha \frac{|w_1|^{\alpha +1}}{w_2^{\alpha + 1} } \end{aligned}$$

we easily see that the function

$$\begin{aligned} \Phi _\alpha (w)= \frac{\alpha }{\alpha +1} \, w_1^2 + w_2^2 \end{aligned}$$

is constant along solutions. All solutions have \(w_1 w'_1\le 0\) and \(w'_2 \ge 0\), see Fig. 2.

Fig. 2
figure 2

Solutions (right) and level sets of potential \({{\mathcal {J}}}\) (left) for \(\alpha =2\)

The first observation concerns the limit \(\lim _{s\rightarrow \infty } w(s)= \lim _{t\rightarrow \infty }u(t)\) for the unique solution starting in \(u^0=(u_1^0,u_2^0)\) with \(u_2^0\ge 0\). Using the first integral \(\Phi _\alpha \) and the signs of \(w'_j\), we find that the solution converges to the limit point

$$\begin{aligned} {{\mathfrak {L}}}(u^0):= \lim _{s\rightarrow \infty } w(s)= \lim _{t\rightarrow \infty }u(t) \ = \ \Big (\, 0\, , \sqrt{\Phi _\alpha (u^0)}\Big ). \end{aligned}$$

As expected, the mapping \(u^0\mapsto \mathfrak L(u^0)\) is 1-homogeneous, but otherwise it is nonlinear.
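The conservation of \(\Phi _\alpha \) and the limit point \({{\mathfrak {L}}}(u^0)\) can be confirmed numerically; a small sketch with classical RK4, for the ad hoc choices \(\alpha =2\) and \(u^0=(1,1)\):

```python
import numpy as np

alpha = 2.0                              # ad hoc exponent with alpha >= 1

def rhs(w):                              # right-hand side of the gradient flow
    w1, w2 = w
    return np.array([-(alpha+1)*np.abs(w1)**(alpha-1)*w1 / w2**alpha,
                     alpha*np.abs(w1)**(alpha+1) / w2**(alpha+1)])

def phi(w):                              # the first integral Phi_alpha
    return alpha/(alpha+1)*w[0]**2 + w[1]**2

w, dt = np.array([1.0, 1.0]), 0.01
for _ in range(5000):                    # classical RK4 up to s = 50
    k1 = rhs(w); k2 = rhs(w + dt/2*k1); k3 = rhs(w + dt/2*k2); k4 = rhs(w + dt*k3)
    w = w + dt/6*(k1 + 2*k2 + 2*k3 + k4)
# Phi_alpha stays at 5/3 and w approaches the limit point (0, sqrt(5/3))
```

The slow algebraic decay \(w_1(s)\approx c/s\) for \(\alpha =2\) is visible in the fact that \(w_1\) is still of order \(10^{-2}\) at \(s=50\).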

We may also analyze the decay properties of w(s) for \(s\rightarrow \infty \) or of u(t) for \(t\rightarrow \infty \). Restricting to the level set \(\Phi _\alpha (w(s)) = \Phi _\alpha (u^0)\), it is sufficient to solve an ODE for \(w_1\) of the form \(w'_1(s)=F_{u^0}(w_1)\), where \(F_{u^0}(w_1)=- c_{u^0}|w_1|^{\alpha -1} w_1 +O(|w_1|^{\alpha +1})\). Hence, in the case \(\alpha >1\) we find algebraic decay of the form

$$\begin{aligned} w(s)- {{\mathfrak {L}}}(u^0) \approx c s^{-1/(\alpha -1)} \text { for } s \rightarrow \infty . \end{aligned}$$

Using \(t(s)=1/\Vert w'(s)\Vert \), we hence find \(u(t)-{{\mathfrak {L}}}(u^0)\approx c t^{-1/\alpha }\).

The case \(\alpha =1\) is special, since w(s) converges exponentially to \({{\mathfrak {L}}}(u^0)\), while \(u(t) - {{\mathfrak {L}}}(u^0) \approx c /t\).

It is also interesting to analyze the behavior of solutions starting with \({{\mathcal {J}}}(u^0)=\infty \), i.e. \(u^0=(\beta ,0)\) with \(\beta \ne 0\). Using the first integral \(\Phi _\alpha \) again, we can now write an ODE for \(w_2\), namely

$$\begin{aligned} w'_2 = G_{u^0}(w_2) = c_{\alpha ,u^0}w_2^{-(\alpha +1)} + O(w_2^{-\alpha }) \text { for } w_2\rightarrow 0. \end{aligned}$$

Thus, we find \(w(s)- u^0 \approx c s^{1/(\alpha +2)}\) and hence

$$\begin{aligned} \Vert w'(s)\Vert \approx s^{-(\alpha +1)/(\alpha +2)}\quad \text { and } \quad {{\mathcal {J}}}(w(s)) \approx s^{-\alpha /(\alpha +2)}\quad \text { for }s\rightarrow 0. \end{aligned}$$

Note that \(\Vert w'(\cdot )\Vert \) and \({{\mathcal {J}}}(w(\cdot ))\) are integrable near \(s=0\), while \(s\mapsto \Vert w'(s)\Vert ^2\) is not.

For the energetic solution we find using \(t(s)=1/\Vert w'(s)\Vert \approx s^{(\alpha +1)/(\alpha +2)} \) the relations

$$\begin{aligned} u(t)-u^0\approx t^{1/(\alpha +1)},\quad \Vert \dot{u}(t)\Vert \approx t^{-\alpha /(\alpha +1)} , \quad {{\mathcal {J}}}(u(t))\approx t^{-\alpha /(\alpha +1)}. \end{aligned}$$

In particular, we conclude that \(t\mapsto {{\mathcal {E}}}(t,u(t))=t\,{{\mathcal {J}}}(u(t))\) is continuous and \(\Vert \dot{u}\Vert \) and \({{\mathcal {J}}}(u(\cdot ))\) are integrable.

3.3 Degenerate Potential

The functional \({{\mathcal {J}}}(u)=\Vert u\Vert -u_1=\sqrt{u_1^2{+}u_2^2}-u_1\) has the property that \({{\mathcal {J}}}(u)=0\) if and only if \(u\in \{\, (a,0) \, | \, a\ge 0 \,\} \). For \(w(0)=(a,0)\) we have \(w(s)=(a,0)\) if \(a\ge 0\) and \(w(s)=(\min \{a{+}2s,0\}, 0)\) for \(a <0\).

Moreover, it is easy to see that \(M(u)=\sqrt{u_1^2{+}u_2^2}+u_1\) is conserved along solutions. Hence, the orbits lie on the level sets \(M(u)=m\ge 0\), which are parabolas, and satisfy \(u_1(t)=(m^2{-}u_2(t)^2)/(2m)\), see Fig. 3. In particular, a solution starting in \(u^0 \in {{\mathbb {R}}}^2\) satisfies \(u(t)\rightarrow \big (\frac{1}{2}M(u^0),0\big )\) for \(t \rightarrow \infty \), where the convergence is exponential for \(u^0_2\ne 0\).

Fig. 3
figure 3

Gradient flow in \({{\mathbb {R}}}^2\) for \({{\mathcal {J}}}(u)=\Vert u\Vert -u_1\). Level sets of \({{\mathcal {J}}}(\cdot )=c \in \{0.1,0.3,0.5,1,1.5,2\}\) are shown in gray, and the orbits are drawn in blue (color figure online)
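The conservation of M and the limit \(\big (\frac{1}{2}M(u^0),0\big )\) can again be confirmed numerically (an RK4 sketch with the ad hoc initial value \(u^0=(0,1)\), so that \(M(u^0)=1\)):

```python
import numpy as np

def rhs(u):                              # gradient flow for J(u) = ||u|| - u1
    n = np.hypot(u[0], u[1])
    return np.array([1.0 - u[0]/n, -u[1]/n])

def M(u):                                # conserved quantity M(u) = ||u|| + u1
    return np.hypot(u[0], u[1]) + u[0]

u, dt = np.array([0.0, 1.0]), 0.01
for _ in range(2000):                    # classical RK4 up to t = 20
    k1 = rhs(u); k2 = rhs(u + dt/2*k1); k3 = rhs(u + dt/2*k2); k4 = rhs(u + dt*k3)
    u = u + dt/6*(k1 + 2*k2 + 2*k3 + k4)
# the orbit stays on the parabola M(u) = 1 and tends to (M/2, 0) = (0.5, 0)
```

Near the limit point the linearization gives \(u_2' \approx -2u_2\), which matches the exponential convergence stated above for \(u_2^0\ne 0\).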

3.4 An Infinite Dimensional Example

We consider the Hilbert space \(X= \mathrm {L}^2({{\mathbb {R}}})\) and the 1-homogeneous functional \({{\mathcal {J}}}(u)= \int _{{\mathbb {R}}}a(x) |u(x)| \;\!{\mathrm {d}}x \) for some non-negative and measurable function \(a: {{\mathbb {R}}}\rightarrow {[0,\infty [}\). We can give the subdifferential \(\partial {{\mathcal {J}}}\) and the minimal elements \(\partial ^0{{\mathcal {J}}}(u)\) explicitly via

$$\begin{aligned} \partial {{\mathcal {J}}}(u)&= \big \{\, \xi \in \mathrm {L}^2({{\mathbb {R}}}) \, \big | \, \xi (x) \in a(x)\mathop {{\mathrm {Sign}}}(u(x)) \text { a.e. } \,\big \} \text { and } \partial ^0{{\mathcal {J}}}(u)= a(x)\mathop {{\mathrm {sign}}}(u(x)). \end{aligned}$$
(3.1)

The associated gradient-flow equation reads

$$\begin{aligned} 0 \in w'(s,x) + a(x) \mathop { {\mathrm {Sign}}}(w(s,x)), \qquad w(0,x)=u^0(x). \end{aligned}$$

We easily see that the solutions are given by the explicit formula

$$\begin{aligned} w(s,x) =\mathop {{\mathrm {sign}}}(u^0(x))\,\max \big \{ \,|u^0(x)| - a(x)s\,, \;0 \, \big \} \end{aligned}$$

For the time derivative we obtain the formula

$$\begin{aligned} w'(s,x)=- a(x) \mathop {{\mathrm {sign}}}(u^0(x))\,\varvec{1}_{\{x : |u^0(x)| - a(x)s>0\}} (x) \end{aligned}$$

and see that \(w'(s)\) is constant on an interval \({]s_1,s_2[}\) if the preimage \( \{\, x \, | \, |u^0(x)|/a(x) \in {]s_1,s_2[} \,\} \) has Lebesgue measure 0.

To see one typical behavior for \(s\approx 0\) and \(s\gg 1\) we look at the simple case

$$\begin{aligned} a \equiv 1 \qquad \text {and} \qquad u^0(x) =\min \{|x|^{-\alpha }, |x|^{-\beta }\} \ \text { with } 0<\alpha< 1/2< \beta <1. \end{aligned}$$

Since \(u^0\) is even and strictly decreasing for \(x>0\), it is easily seen that the support of \(w(s,\cdot )\) is given by \([-X(s),X(s)]\) with \(X(s)= \min \{ s^{-1/\alpha }, s^{-1/\beta }\}\) and

$$\begin{aligned} w(s,x)= \big (u^0(x)-u^0(X(s))\big ) \varvec{1}_{\{|x|\le X(s)\}}(x) . \end{aligned}$$

For \(s\rightarrow 0\) we obtain \({{\mathcal {J}}}(w(s))=\frac{2\beta }{1-\beta } s^{-(1-\beta )/\beta } + O(1)\), while for \(s\ge 1\) we have \({{\mathcal {J}}}(w(s))=\frac{2\alpha }{1-\alpha } s^{-(1-\alpha )/\alpha } \), which is compatible with \(\int _0^\infty {{\mathcal {J}}}(w(s)) \;\!{\mathrm {d}}s < \infty \), cf. (2.9).

For the velocity and the slope we have \(\Vert w'(s)\Vert = \Vert \partial ^0{{\mathcal {J}}}(w(s))\Vert _*= (2X(s))^{1/2}\), which is compatible with \(\Vert w'(s)\Vert \le C/s\), cf. (2.6). Moreover, by choosing \(u^0\) suitably, we can find a support mapping \(s\mapsto X(s)\) such that one or both of the integrals \(\int _0^1\Vert w'(s)\Vert \;\!{\mathrm {d}}s\) and \(\int _1^\infty \Vert w'(s)\Vert \;\!{\mathrm {d}}s\) are infinite.
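The identity \(\Vert w'(s)\Vert =(2X(s))^{1/2}\) can be verified on a grid; the following sketch discretizes the explicit solution for the ad hoc choices \(\alpha =1/4\), \(\beta =3/4\) and compares a finite-difference speed with the prediction:

```python
import numpy as np

alpha, beta = 0.25, 0.75               # exponents with 0 < alpha < 1/2 < beta < 1
B, dx = 4.0, 2e-4
x = (np.arange(int(round(2*B/dx))) + 0.5)*dx - B    # cell centers, avoids x = 0
u0 = np.minimum(np.abs(x)**(-alpha), np.abs(x)**(-beta))

def w(s):                              # explicit solution for a = 1 and u0 >= 0
    return np.maximum(u0 - s, 0.0)

def l2(f):                             # discrete L^2(R) norm
    return np.sqrt(np.sum(f*f)*dx)

speeds = {}
for s in (0.5, 2.0):
    delta = 1e-3
    speeds[s] = l2((w(s + delta) - w(s))/delta)     # approximates ||w'(s)||
    # predicted value: sqrt(2*X(s)) with X(s) = min(s**(-1/alpha), s**(-1/beta))
```

The finite difference equals \(-1\) on the slightly shrunken support and lies between \(-1\) and 0 on a boundary sliver of width \(O(\delta )\), which explains the small discrepancy from \((2X(s))^{1/2}\).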

3.5 Total Variation Flow

An important motivation for the present work is the so-called total variation flow, which plays an important role in image processing. We refer to [3, 4, 6, 9, 18] or the monograph [1] and the references therein.

According to [18] the denoising of an image f given over a domain \(\Omega \subset {{\mathbb {R}}}^2\) can be done by considering the \(\mathrm {L}^2\) gradient flow for the convex functional

$$\begin{aligned} \widetilde{{\mathcal {J}}}: u \mapsto \int _\Omega \big \{ |\nabla u| + \frac{\kappa }{p} | u{-}f|^p \big \} \;\!{\mathrm {d}}x\,. \end{aligned}$$

This leads to the parabolic equation

$$\begin{aligned} w'= \mathop {{\mathrm {div}}}\nolimits \Big ( \frac{1}{|\nabla w|}\,\nabla w \Big ) - \kappa |w {-}f|^{p-2} (w{-}f) \qquad \text { in } \Omega , \end{aligned}$$

which has to be completed by no-flux boundary conditions and interpreted in a suitable weak sense.

Obviously, our theory only applies in the 1-homogeneous cases, i.e. for \(\kappa =0\) (where f is irrelevant) or for \(f\equiv 0\), \(\kappa >0\), and \(p=1\). The former case is also relevant in crystal growth, see [17, Eqn. (61)] and [9]. The latter work addresses in particular the one-dimensional case and shows that facets are preserved. As a consequence for \(\Omega ={{\mathbb {R}}}\), the class of step functions is invariant under the gradient flow. Choosing points \(y_0<y_1< \ldots < y_N\) and setting

$$\begin{aligned} w(s,x) = \sum _{i=1}^N \alpha _i(s) \varvec{1}_{{]y_{i-1},y_i]}}(x) , \end{aligned}$$

the \(\mathrm {L}^2\) norm and the total variation functional reduce to

$$\begin{aligned} \Vert \varvec{\alpha }(s)\Vert _{{\mathbf {y}}}&:= \Vert w(s)\Vert _{\mathrm{L^2}} = \Big ( \sum _{i=1}^N \alpha _i(s)^2 (y_i{-}y_{i-1})\Big )^{1/2}, \\ {{\mathbf {J}}}(\varvec{\alpha }(s))&:={{\mathcal {J}}}(w(s))= \int _{{\mathbb {R}}}|\partial _x w(s,x)| \;\!{\mathrm {d}}x =|\alpha _1(s)| + |\alpha _N(s)| + \sum _{i=2}^{N} \big | \alpha _i(s) - \alpha _{i-1}(s)\big |. \end{aligned}$$

Thus, the evolution of the vector \(\varvec{\alpha }(s)=(\alpha _i(s))_i \in {{\mathbb {R}}}^N\) is indeed determined by the finite-dimensional gradient-flow equation for the gradient system \(({{\mathbb {R}}}^N,{{\mathbf {J}}},\Vert \cdot \Vert _{{\mathbf {y}}})\). It takes the form

$$\begin{aligned} (y_i{-}y_{i-1})\,\alpha _i'(s) \in - \mathop {{\mathrm {Sign}}}\big ( \alpha _i(s){-}\alpha _{i-1}(s)\big ) - \mathop {{\mathrm {Sign}}}\big ( \alpha _i(s){-}\alpha _{i+1}(s)\big ) \ \text { for }i=1,...,N,\nonumber \\ \end{aligned}$$
(3.2)

where we set \(\alpha _0(s)=\alpha _{N+1}(s)=0\) and use the set-valued function \(\mathop {{\mathrm {Sign}}}\) with \(\mathop {{\mathrm {Sign}}}(0)=[-1,1]\). We refer to [6, 9] for illustrative examples.
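The inclusion (3.2) can also be explored numerically. The following minimal sketch (the function names are ours) uses an explicit Euler scheme with the single-valued selection \(\mathrm {sign}(0)=0\), which is adequate only as long as no neighboring facets merge; in the example below the taller of two facets of width 1 decays at speed 2 while its lower neighbor is stationary:

```python
def sign0(x, tol=1e-12):
    """Single-valued selection of the set-valued Sign, with sign0(0) = 0."""
    if abs(x) < tol:
        return 0.0
    return 1.0 if x > 0 else -1.0

def tv_step_flow(alpha, y, h, n_steps):
    """Explicit Euler scheme for the inclusion (3.2),
    (y_i - y_{i-1}) a_i' in -Sign(a_i - a_{i-1}) - Sign(a_i - a_{i+1}),
    with a_0 = a_{N+1} = 0.  The selection sign0 is only adequate while
    neighboring values stay distinct, i.e. away from facet mergings."""
    a = list(alpha)
    N = len(a)
    for _ in range(n_steps):
        ext = [0.0] + a + [0.0]  # pad with the boundary values a_0 = a_{N+1} = 0
        a = [a[i] - h * (sign0(ext[i + 1] - ext[i]) + sign0(ext[i + 1] - ext[i + 2]))
                    / (y[i + 1] - y[i])
             for i in range(N)]
    return a

# Two facets of height 2 and 1 on ]0,1] and ]1,2]: until the heights merge,
# the left facet decreases while the right one is in equilibrium.
print(tv_step_flow([2.0, 1.0], [0.0, 1.0, 2.0], 1e-3, 400))
```

Since the right-hand side is constant before the facets interact, the Euler steps reproduce the exact linear decay here; the scheme is only a sketch, not a substitute for the prox-based schemes in the cited works.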

4 Energetic Solutions

Energetic solutions are defined by stability and by the energy balance in (1.6). We refer to [10, 14] for an introduction and a more extensive theory, respectively. For definiteness we rewrite the definition of energetic solutions for our special ERIS with \({{\mathcal {E}}}(t,u)=t{{\mathcal {J}}}(u)\). We call a mapping \(u:{[0,\infty [} \rightarrow X\) an energetic solution for the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\), if \(t\mapsto {{\mathcal {J}}}(u(t))\) lies in \(\mathrm {L}^1_{\mathrm {loc}}({[0,\infty [})\) and if for all \(r,t\ge 0\) with \(r<t\) we have

$$\begin{aligned} \text {(S)}\quad&\forall \, \widetilde{u}\in X: \quad t{{\mathcal {J}}}(u(t)) \le t{{\mathcal {J}}}(\widetilde{u})+ \Vert \widetilde{u}{-}u(t)\Vert ; \end{aligned}$$
(4.1a)
$$\begin{aligned} \text {(E)}\quad&t{{\mathcal {J}}}(u(t)) + {\mathrm {Var}}_{\Vert \cdot \Vert } (u;[r,t]) = r{{\mathcal {J}}}(u(r)) +\int _r^t {{\mathcal {J}}}(u(\tau )) \;\!{\mathrm {d}}\tau ; \end{aligned}$$
(4.1b)
$$\begin{aligned} \text {(I)}\quad&u(0)=u(0^+):=\lim _{\tau \rightarrow 0^+} u(\tau ) \ \text { in the norm topology}. \end{aligned}$$
(4.1c)

Note that we ask for the energy balance (E) on all compact subintervals \([r,t]\) of \({[0,\infty [}\), whereas it is usual to impose it only on [0, T] for all \(T>0\). However, we have a singular situation at \(t=0\), because of \({{\mathcal {E}}}(0,u)=0\cdot {{\mathcal {J}}}(u)=0\).

Here it is important that \(\int _0^t {{\mathcal {J}}}(u(\tau ))\;\!{\mathrm {d}}\tau <\infty \) implies that \({\mathrm {Var}}_{\Vert \cdot \Vert } (u;[0,t])<\infty \) for all \(t>0\), see Proposition 4.4. This means that the limit from the right \(u(0^+)=\lim _{\tau \rightarrow 0^+} u(\tau )\) exists and the attainment of the initial condition in (I) is well-defined.

4.1 Preliminaries on ERIS

In general, the stability condition (S) in (1.6) is best formulated via the sets of stable states

$$\begin{aligned} {{\mathcal {S}}}(t):= \big \{\, u\in X \, \big | \, \forall \,v\in X: \ {{\mathcal {E}}}(t,u)\le {{\mathcal {E}}}(t,v)+ \Vert v{-}u\Vert \,\big \} . \end{aligned}$$

For (S) in (4.1a) we use the convexity of \({{\mathcal {J}}}\) and \({{\mathcal {E}}}(t,u)=t{{\mathcal {J}}}(u)\) to find

$$\begin{aligned} {{\mathcal {S}}}(t)= \big \{\, u\in X \, \big | \, t\,\partial {{\mathcal {J}}}(u) \cap B_1(0) \ne \emptyset \,\big \} = \big \{\, u\in X \, \big | \, \Vert \partial ^0{{\mathcal {J}}}(u)\Vert _* \le 1/t \,\big \} , \quad \text {where } B_1(0)\subset X^*. \end{aligned}$$

Hence, for \(t>0\) the stability of u implies that \(u \in {\mathrm {dom}}(\partial {{\mathcal {J}}})\). We obviously have

$$\begin{aligned} X={{\mathcal {S}}}(0) \ \supset \ {{\mathcal {S}}}(t_1)\ \supset \ {{\mathcal {S}}}(t_2) \ \supset \ \{\, u \in X \, | \, {{\mathcal {J}}}(u)=0 \,\} \quad \text { for } 0\le t_1< t_2. \end{aligned}$$
(4.2)

Moreover, each \({{\mathcal {S}}}(t)\) is a cone, i.e. \(\lambda \ge 0\) and \(u \in {{\mathcal {S}}}(t)\) implies \(\lambda u \in {{\mathcal {S}}}(t)\). However, in general these sets are not convex and not weakly closed. Indeed for the example \({{\mathcal {J}}}(u)=\max \{|u_1|,2|u_2|\}\) from Sect. 3.1 we have

$$\begin{aligned} {{\mathcal {S}}}(t)=\left\{ \begin{array}{ll}{{\mathbb {R}}}^2&{} \text {for }t \in [0, 1/2],\\ \big \{\, (u_1,u_2) \, \big | \, |u_1|\ge 2|u_2| \,\big \} &{} \text {for } t \in {]1/2,1]},\\ \big \{\, (u_1,u_2) \, \big | \, |u_1|= 2|u_2| \,\big \} &{} \text {for } t \in {]1,\sqrt{5}/2]},\\ \{(0,0)\}&{} \text {for } t > \sqrt{5}/2. \end{array} \right. \end{aligned}$$
(4.3)
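The value \(\Vert \partial ^0{{\mathcal {J}}}(u)\Vert _* = 2/\sqrt{5}\) on the ridge \(|u_1|=2|u_2|>0\), which determines the last threshold above, can be checked by a direct scan (a quick numerical sketch; the segment parametrization below is standard):

```python
import math

# Minimal-norm subgradient of J(u) = max(|u1|, 2|u2|) on the ridge
# |u1| = 2|u2| > 0: there the subdifferential is (up to signs) the
# segment conv{(1, 0), (0, 2)}; scan it for the minimal Euclidean norm.
n = 10**5
best = min(math.hypot(1 - i / n, 2 * i / n) for i in range(n + 1))
print(best, 2 / math.sqrt(5))
```

The minimum is attained at the segment parameter \(\theta =1/5\), giving the point \((4/5,2/5)\) of norm \(2/\sqrt{5}\).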

However, the stability sets are strongly closed, which follows easily from the lower semicontinuity of \({{\mathcal {J}}}\), see also [10, Prop. 5.9].

Lemma 4.1

(Strong closedness of stability sets) For a sequence \((t_n,u_n)\in {[0,\infty [} \times X\) we have

$$\begin{aligned} \Big ( u_n\in {{\mathcal {S}}}(t_n) \ \text { and }\ (t_n,u_n)\rightarrow (t,u)\Big )\quad \Longrightarrow \quad u\in {{\mathcal {S}}}(t). \end{aligned}$$
(4.4)

The following example, which is the rate-independent analog of the GS studied in Sect. 3.4, provides a non-trivial case in which we are able to show that \({\mathrm {Var}}_{\Vert \cdot \Vert }(u;[0,1])=\infty \) may actually occur. Then the limit \(u(0^+)=\lim _{\tau \rightarrow 0^+}u(\tau )\) does not exist, and the attainment of the initial condition \(u(0)=u(0^+)\) does not make sense.

Example 4.2

(Case with \(\int _0^1{{\mathcal {J}}}(u(\tau ))\;\!{\mathrm {d}}\tau =\infty \)) We return to the example treated in Sect. 3.4 with \(X=\mathrm {L}^2({{\mathbb {R}}})\) and \({{\mathcal {J}}}(u)=\int _{{\mathbb {R}}}|u(x)|\;\!{\mathrm {d}}x\) (i.e. \(a\equiv 1\)). Starting with a non-negative, even \(u^0\), such that \(u^0|_{{]0,\infty [}}\) is differentiable and strictly decreasing, we obtain the energetic solution

$$\begin{aligned} u(t,x)= \max \{ 0,\, u^0(x) - S(t)\} \quad \text {with } S(t)=u^0(X(t)) \text { and } X(t)=1/(2t^2). \end{aligned}$$

To understand the construction, consider the special case \(u^0(x)=(2|x|)^{-\beta }\) for \(|x|\ge 1\) with \(\beta >1/2\) to have \(u^0\in \mathrm {L}^2({{\mathbb {R}}})\). Then \(S(t)=t^{2\beta }\).
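The identity \(S(t)=t^{2\beta }\) is a one-line computation, \(S(t)=u^0(X(t))=(2X(t))^{-\beta }=(t^{-2})^{-\beta }\), which the following short numeric check confirms (helper names are ours; the values of \(u^0\) on \([-1,1]\) do not enter as long as \(X(t)\ge 1\)):

```python
def u0(x, beta):
    # initial datum of this special case for |x| >= 1; its values on
    # [-1, 1] are irrelevant here as long as X(t) = 1/(2 t^2) >= 1
    return (2.0 * abs(x)) ** (-beta)

def S(t, beta):
    # S(t) = u0(X(t)) with X(t) = 1/(2 t^2)
    return u0(1.0 / (2.0 * t * t), beta)

beta = 0.75
t = 0.3   # then X(t) = 1/0.18 > 1
print(S(t, beta), t ** (2 * beta))
```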

To show that u is an energetic solution we observe that \(\partial ^0{{\mathcal {J}}}(u)\) given in (3.1) reads

$$\begin{aligned} \partial ^0{{\mathcal {J}}}(u(t))(x)=\varvec{1}_{[-X(t),X(t)]}(x) \quad \text {giving } \Vert \partial ^0{{\mathcal {J}}}(u(t))\Vert _*= \big (2X(t)\big )^{1/2}= 1/t, \end{aligned}$$

which implies \(u(t)\in {{\mathcal {S}}}(t)\) and (S) in (4.1) is satisfied.

Moreover, \(t \mapsto u(t)\) is differentiable with

$$\begin{aligned} \dot{u}(t,\cdot ) = -\dot{S}(t) \varvec{1}_{[-X(t),X(t)]} \quad \text {giving }\Vert \dot{u}(t)\Vert = |\dot{S}(t)|\big (2X(t)\big )^{1/2} = |\dot{S}(t)|/t . \end{aligned}$$

Hence, on the one hand the variation can be calculated to obtain

$$\begin{aligned} {\mathrm {Var}}_{\Vert \cdot \Vert }(u;[r,t])&= \int _r^t \Vert \dot{u}(\tau )\Vert \;\!{\mathrm {d}}\tau = \int _r^t \frac{\dot{S}(\tau )}{\tau }\;\!{\mathrm {d}}\tau = \frac{S(t)}{t} - \frac{S(r)}{r} + \int _r^t \frac{S(\tau )}{\tau ^2} \;\!{\mathrm {d}}\tau . \end{aligned}$$

On the other hand, the functional \({{\mathcal {J}}}\) can be evaluated explicitly via

$$\begin{aligned} {{\mathcal {J}}}(u(t))&= 2 \int _0^{X(t)}\!\big \{u^0(x)-S(t)\big \} \;\!{\mathrm {d}}x \ \overset{x=X(\tau )}{=} \ 2\int _t^\infty \!\big \{ u^0(X(\tau )) - u^0(X(t))\big \} |\dot{X}(\tau )| \;\!{\mathrm {d}}\tau \\&= 2\int _t^\infty \frac{S(\tau )-S(t)}{\tau ^3} \;\!{\mathrm {d}}\tau = 2\int _t^\infty \frac{S(\tau )}{\tau ^3} \;\!{\mathrm {d}}\tau - \frac{S(t)}{t^2}. \end{aligned}$$

With this, the energy balance (E) in (4.1) follows by a straightforward calculation.

For this case we can construct an example where \(\int _0^1 {{\mathcal {J}}}(u(t))\;\!{\mathrm {d}}t =\infty \) and hence \({\mathrm {Var}}_{\Vert \cdot \Vert }(u;[0,1])=\infty \) as well. In particular, \(\lim _{\tau \rightarrow 0^+} u(\tau )\) does not exist. To obtain such a case choose \(u^0\) with \(u^0(x)= |x|^{-1/2} (\log |x|)^{-\beta }\) for \(|x|\ge x_*\gg 1\) and \(u^0(x)=0\) otherwise. Then \(u^0\in \mathrm {L}^2({{\mathbb {R}}})\) for \(\beta >1/2\). For \(t\rightarrow 0\) we obtain \(S(t) \approx c_0 t (\log (1/t))^{-\beta }\) and \(\dot{S}(t) \approx c_1 (\log (1/t))^{-\beta }\). We conclude

$$\begin{aligned} {\mathrm {Var}}_{\Vert \cdot \Vert }(u;[0,1]) = \int _0^1 \frac{\dot{S}(t)}{t} \;\!{\mathrm {d}}t \approx \int _0^1 \frac{\;\!{\mathrm {d}}t}{t\,(\log (1/t))^\beta } = \infty \quad \text {for }\beta \le 1. \end{aligned}$$

Thus, for \(\beta \in {]1/2,1]}\) we obtain a case where \(\lim _{\tau \rightarrow 0^+} u(\tau )\) does not exist.
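The divergence for \(\beta \le 1\) (and convergence for \(\beta >1\)) can be checked via the explicit antiderivative; for \(\beta \ne 1\) we have

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t}\, \frac{\big (\log (1/t)\big )^{1-\beta }}{\beta -1} = \frac{1}{t\,(\log (1/t))^{\beta }}, \end{aligned}$$

and \((\log (1/t))^{1-\beta }\) stays bounded for \(t\rightarrow 0^+\) exactly when \(\beta >1\); for \(\beta =1\) the antiderivative is \(-\log \log (1/t)\), which also diverges.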

4.2 Decay of \(\pmb {{{\mathcal {J}}}}\) and Continuity at \(t=0\)

Without any further knowledge on the energetic solutions, one can show that \(t\mapsto {{\mathcal {J}}}(u(t))\) is non-increasing. We emphasize that for general energetic solutions we allow \({{\mathcal {J}}}(u^0)=\infty \), but enforce via (E) in (4.1b) the integrability condition \(\int _0^1 {{\mathcal {J}}}(u(t))\;\!{\mathrm {d}}t< \infty \). This then implies the continuity \(u(t)\rightarrow u(0)\) for \(t\rightarrow 0^+\). See Example 4.2 for a case with \(\int _0^1{{\mathcal {J}}}(u(t))\;\!{\mathrm {d}}t=\infty \), where it is unclear in what sense the initial datum \(u^0\) is attained because the right limit \(u(0^+)\) does not exist with respect to the norm topology.

Lemma 4.3

(Decay of \({{\mathcal {J}}}\) along u) Let u be any solution of the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\). Then, we have

$$\begin{aligned} \frac{1}{r} \Vert u(r)\Vert \ge {{\mathcal {J}}}(u(r))\ge {{\mathcal {J}}}(u(t)) \text { for } 0< r < t . \end{aligned}$$

Proof

Since the dissipation is non-negative we have the energy estimate

$$\begin{aligned} e(t):={{\mathcal {E}}}(t,u(t))=t\,{{\mathcal {J}}}(u(t)) \le r\,{{\mathcal {J}}}(u(r)) + \int _r^t {{\mathcal {J}}}(u(\tau )) \;\!{\mathrm {d}}\tau = e(r)+ \int _r^t \frac{1}{\tau }e(\tau )\;\!{\mathrm {d}}\tau . \end{aligned}$$

Applying Grönwall’s estimate to e, we obtain \(e(t)\le \exp \big (\int _r^t \tau ^{-1} \;\!{\mathrm {d}}\tau \big )\, e(r) = (t/r)\,e(r)\), which gives the second estimate \({{\mathcal {J}}}(u(t))\le {{\mathcal {J}}}(u(r))\) after dividing by t.

For the first estimate we simply use stability of u(r) and test with \(v=0\), namely

$$\begin{aligned} r\,{{\mathcal {J}}}(u(r)) \le r{{\mathcal {J}}}(0)+\Vert 0{-}u(r)\Vert = 0 + \Vert u(r)\Vert . \end{aligned}$$

This gives the first estimate in the assertion. \(\square \)

With this we now show the continuity of the energetic solutions u as defined in (4.1).

Proposition 4.4

(Continuity at \(t=0\)) Let \(u:{[0,\infty [} \rightarrow X\) be an energetic solution in the sense of (4.1), i.e. in particular \(\int _0^1 {{\mathcal {J}}}(u(t))\;\!{\mathrm {d}}t <\infty \).

Then, u has a right limit \(u(0^+):=\lim _{\tau \rightarrow 0^+} u(\tau )\) in the norm sense, and we have

$$\begin{aligned} {\mathrm {Var}}_{\Vert \cdot \Vert }(u;[0,t]) \le \int _0^t{{\mathcal {J}}}(u(\tau ))\;\!{\mathrm {d}}\tau <\infty \ \text { for all } t>0. \end{aligned}$$
(4.5)

Proof

Using \({{\mathcal {J}}}(u(t))\ge 0\), the energy balance (E) gives

$$\begin{aligned} {\mathrm {Var}}_{\Vert \cdot \Vert }(u;[r,t]) \le r{{\mathcal {J}}}(u(r)) + \int _r^t {{\mathcal {J}}}(u(\tau ))\;\!{\mathrm {d}}\tau \le \int _0^t {{\mathcal {J}}}(u(\tau ))\;\!{\mathrm {d}}\tau < \infty , \end{aligned}$$

where we used the monotonicity \({{\mathcal {J}}}(u(\tau ))\ge {{\mathcal {J}}}(u(r))\) for \(\tau \in {]0,r[}\). Since \(0<r<t\) were arbitrary, we have

$$\begin{aligned} {\mathrm {Var}}_{\Vert \cdot \Vert }(u;{]0,t]})= \lim _{r\rightarrow 0^+}{\mathrm {Var}}_{\Vert \cdot \Vert }(u;[r,t]) \le \int _0^t {{\mathcal {J}}}(u(\tau ))\;\!{\mathrm {d}}\tau < \infty , \end{aligned}$$

which implies that \(u(0^+)\) exists. Now using the initial condition \(u(0)=u(0^+)\) we have \({\mathrm {Var}}_{\Vert \cdot \Vert }(u;{]0,t]}) = {\mathrm {Var}}_{\Vert \cdot \Vert }(u;[0,t])\) and the result is established. \(\square \)

Remark 4.5

(Attainment of the initial condition) It would be highly desirable to define energetic solutions also for cases where \({\mathrm {Var}}_{\Vert \cdot \Vert }(u;[0,1])=\infty \). One possible way would be to replace the energy balance (E) by the corresponding balance on subintervals \([r,t]\) as follows.

$$\begin{aligned} \widetilde{\text {(E)}}\quad&t{{\mathcal {J}}}(u(t)) + {\mathrm {Var}}_{\Vert \cdot \Vert } (u;[r,t]) = r{{\mathcal {J}}}(u(r))+\int _r^t {{\mathcal {J}}}(u(\tau )) \;\!{\mathrm {d}}\tau \quad \text { for } 0< r<t. \end{aligned}$$

However, we still need a relation to couple the solution \(u:{]0,\infty [}\rightarrow X\) to its initial condition \(u^0\), which could be done by defining the variational interpolants

$$\begin{aligned} \widetilde{u}(t) = \mathrm {arg\,min} \big \{\, \Vert \widetilde{u} -u^0\Vert + t{{\mathcal {J}}}(\widetilde{u}) \, \big | \, \widetilde{u} \in X \,\big \} \end{aligned}$$

and asking

$$\begin{aligned} \widetilde{\text {(I)}}\quad&\Vert u(t)-\widetilde{u}(t)\Vert \rightarrow 0 \quad \text {for } t\rightarrow 0^+. \end{aligned}$$

It is an open question whether this option provides a good definition leading to existence and uniqueness.

4.3 Existence Theory for ERIS

A standard method of constructing energetic solutions is the method of incremental minimization (also known as minimizing movement scheme). Choosing a time step \(h>0\) we use the discrete times \(t_k:=kh\) and define the approximate solutions via

$$\begin{aligned} u^k_h :=\mathrm {arg\,min} \big \{\, \Vert u {-} u^{k-1}_h \Vert + {{\mathcal {E}}}(kh, u) \, \big | \, u \in X \,\big \} , \end{aligned}$$

where \(u^0_h=u^0\). By convexity and lower semicontinuity of \({{\mathcal {E}}}(kh,\cdot ) = kh{{\mathcal {J}}}(\cdot )\) and the strict convexity of the norm, we obtain a unique minimizer in each step and can thus construct the piecewise constant interpolant

$$\begin{aligned} \overline{u}_h:{[0,\infty [}\rightarrow X \quad \text {with }\overline{u}_h(0)=u^0_h \text { and } \overline{u}_h(t)=u^k_h \text { for }t\in {]kh{-}h,kh]}. \end{aligned}$$
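To see one incremental step in action, one can brute-force the minimization for the two-dimensional example \({{\mathcal {J}}}(u)=\max \{|u_1|,2|u_2|\}\) from (4.3); the grid search below is a sketch with names of our choosing, not the actual construction used in the proofs:

```python
import math

def J(u1, u2):
    return max(abs(u1), 2.0 * abs(u2))

def incremental_step(prev, t, grid):
    """Brute-force minimizer of u -> ||u - prev|| + t*J(u) over grid x grid,
    approximating one incremental-minimization step for E(t,u) = t*J(u)."""
    best, best_val = prev, float("inf")
    for u1 in grid:
        for u2 in grid:
            val = math.hypot(u1 - prev[0], u2 - prev[1]) + t * J(u1, u2)
            if val < best_val:
                best, best_val = (u1, u2), val
    return best

grid = [i / 20.0 for i in range(-20, 21)]
# (1, 1/2) lies on the ridge |u1| = 2|u2|: it is stable, hence fixed,
# for small t, while for large t the minimizer moves to the origin.
print(incremental_step((1.0, 0.5), 0.25, grid))
print(incremental_step((1.0, 0.5), 2.0, grid))
```

The two calls illustrate the role of the stability sets: for small t the previous state is already a global minimizer, so the scheme does not move.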

It is then standard to apply a Banach-space valued version of Helly’s selection principle to obtain a weakly convergent subsequence with a limit function u and to derive an upper energy estimate, i.e. (E) on [0, T] but with “\(\le \)” instead of “\(=\)”, see the general references [10, 15, 16].

The major difficulty in concluding the proof is to show that the limit function u still satisfies the stability condition (S). Lemma 4.1 guarantees strong closedness, while only weak convergence can be inferred. Since the sets \({{\mathcal {S}}}(t)\) of stable states are typically not convex (see (4.3) for an example), they are also not weakly closed.

To generate the missing strong convergence, the usual approach is to assume that the sublevels of \({{\mathcal {J}}}\) are compact in X and that \({{\mathcal {J}}}(u^0)<\infty \). Then, all approximations \(u^k_h\) lie in such a compact set and weak convergence turns into strong convergence, and existence follows by the standard arguments as given in the above references.

For completeness we state the following existence result, where the compactness of the sublevels of \({{\mathcal {J}}}\) is slightly weakened by exploiting the a priori bound \(\Vert u^k_h\Vert \le \Vert u^0\Vert \). We emphasize that our main result stated in Theorem 1.1 does not impose any compactness assumption. Moreover, it provides uniqueness, which cannot be derived directly from the theory of energetic solutions. Thus, the results of the following proposition are all contained in Theorem 1.1, but here we use the standard theory only, not relying on the equivalence to the GS \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\).

Proposition 4.6

(Existence theory using compactness) Consider the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\) with \({{\mathcal {E}}}(t,u)=t{{\mathcal {J}}}(u)\) where \({{\mathcal {J}}}:X\rightarrow [0,\infty ]\) is lower semicontinuous, convex, and 1-homogeneous. Impose additionally, that the functional

$$\begin{aligned} {{\mathcal {G}}}: X\rightarrow [0,\infty ]; \ u \mapsto {{\mathcal {J}}}(u)+ \Vert u\Vert \end{aligned}$$

has compact sublevels. Then for all \(u^0\in X\) with \({{\mathcal {J}}}(u^0)<\infty \) there exists an energetic solution u in the sense of (4.1) with \(u(0)=u^0\). Moreover, this solution satisfies

$$\begin{aligned} {{\mathcal {J}}}(u(r))\ge {{\mathcal {J}}}(u(t)) \ \text { and } \ \Vert u(r)\Vert \ge \Vert u(t)\Vert \quad \text {for } 0 \le r< t. \end{aligned}$$

Proof

The only non-trivial part of the proof is to show that the approximations \(u^k_h\) satisfy an a priori bound \({{\mathcal {G}}}(u^k_h)\le C\). If this is done, then the standard existence theory (see e.g. [15, Thm. 6.3(2)]) applies.

To provide the bound on \({{\mathcal {G}}}\) we analyze the incremental problems in a little more detail. Since \(u^k_h\) is a minimizer, we have

$$\begin{aligned} kh{{\mathcal {J}}}(u^k_h)+ \Vert u^k_h{-}u^{k-1}_h\Vert \le kh{{\mathcal {J}}}(u^{k-1}_h)+ 0, \end{aligned}$$

which implies \({{\mathcal {J}}}(u^k_h) \le {{\mathcal {J}}}(u^{k-1}_h)\).

We also claim \(\Vert u^k_h\Vert \le \Vert u^{k-1}_h\Vert \). To see this, we may restrict to the case \(u^k_h \ne u^{k-1}_h\), since otherwise the inequality holds trivially. Then, the Euler–Lagrange equation reads

$$\begin{aligned} 0 \in \frac{1}{\Vert u^k_h {-} u^{k-1}_h\Vert }\,E\big ( u^k_h {-} u^{k-1}_h\big ) + kh\, \partial {{\mathcal {J}}}(u^k_h). \end{aligned}$$

Testing this equation with \(u^k_h\) and using that \(\langle \eta ,u\rangle = {{\mathcal {J}}}(u)\ge 0\) for all \(\eta \in \partial {{\mathcal {J}}}(u)\) we conclude

$$\begin{aligned} \Vert u^k_h\Vert ^2 - \big (u^{k-1}_h\big |u^k_h\big ) = \langle E\big ( u^k_h {-} u^{k-1}_h\big ), u^k_h\rangle = -\Vert u^k_h {-} u^{k-1}_h\Vert \,kh\, {{\mathcal {J}}}(u^k_h) \le 0. \end{aligned}$$

This implies \(\Vert u^k_h\Vert \le \Vert u^{k-1}_h\Vert \) as desired.
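The identity \(\langle \eta ,u\rangle ={{\mathcal {J}}}(u)\) for \(\eta \in \partial {{\mathcal {J}}}(u)\) used above is Euler's relation for convex 1-homogeneous functionals; for completeness, the subgradient inequality with \(v=\lambda u\) gives

$$\begin{aligned} (\lambda -1)\,{{\mathcal {J}}}(u) = {{\mathcal {J}}}(\lambda u)-{{\mathcal {J}}}(u) \ge \langle \eta , \lambda u - u\rangle = (\lambda -1)\langle \eta ,u\rangle \quad \text {for all }\lambda >0, \end{aligned}$$

and choosing \(\lambda >1\) and \(\lambda <1\) yields the two opposite inequalities, hence equality.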

Together, we obtain \({{\mathcal {G}}}(u^k_h)\le {{\mathcal {G}}}(u^{k-1}_h)\le {{\mathcal {G}}}(u^0)\) and conclude that all values \(\overline{u}_h(t)\) of the approximating functions lie in the compact set \(K:= \big \{\, u\in X \, \big | \, {{\mathcal {G}}}(u)\le {{\mathcal {G}}}(u^0) \,\big \} \). Thus existence follows.

The monotonicity of \({{\mathcal {J}}}\) follows from Lemma 4.3. Moreover, for the approximation functions \(\overline{u}_h\) we have \(\Vert \overline{u}_h(r)\Vert \ge \Vert \overline{u}_h(t)\Vert \) for \(0 \le r< t\). Because the weak convergence in the compact set K is turned into strong convergence, this inequality survives for the limit function as well. \(\square \)

It is an open question how to show the monotonicity of \(t\mapsto \Vert u(t)\Vert \) directly for all energetic solutions.

4.4 Advanced Properties of Energetic Solutions

The next result characterizes jumps and shows that along a jump the solution can be modified with any value on the straight line connecting the left and the right limit. The result is essentially contained in [15], but we provide a full proof for the present special case.

To state the result we introduce the notation of left and right limits \(u(t^\pm )\) of \(u:{[0,\infty [} \rightarrow X\), which exist since \({\mathrm {Var}}_{\Vert \cdot \Vert }(u,[0,T])\) is finite for all \(T>0\). We set

$$\begin{aligned} u(t^-)= \lim _{\tau \rightarrow t^-} u(\tau ) \quad \text { and } \quad u(t^+)= \lim _{\tau \rightarrow t^+} u(\tau ), \end{aligned}$$

and use a corresponding notation for the dissipation, namely

$$\begin{aligned} {\mathrm {Var}}_{\Vert \cdot \Vert }(u,{[t_1,t_2[})=\lim _{\tau \rightarrow t^-_2} {\mathrm {Var}}_{\Vert \cdot \Vert }(u,[t_1,\tau ]) = {\mathrm {Var}}_{\Vert \cdot \Vert }(u,[t_1,t_2]) - \Vert u(t_2){-}u(t_2^-)\Vert . \end{aligned}$$

In the following result the assertions (i) to (iii) hold even without convexity, see [14, Lem. 2.1.13]. For (iv) convexity is needed, but not the 1-homogeneity.

Proposition 4.7

(Jumps in ERIS) Consider a solution \(u:{[0,\infty [} \rightarrow X\) of ERIS and a time \(t>0\) such that u is not continuous at t (i.e. not all of the three values \(u(t^-)\), u(t), and \(u(t^+)\) are the same). Then, we have the relations

$$\begin{aligned} \begin{aligned} \text {(i) }&{{\mathcal {E}}}(t,u(t^+))+\Vert u(t^+){-}u(t^-)\Vert = {{\mathcal {E}}}(t,u(t^-)), \\ \text {(ii) }&{{\mathcal {E}}}(t,u(t))+\Vert u(t){-}u(t^-)\Vert = {{\mathcal {E}}}(t,u(t^-)), \\ \text {(iii) }&\exists \, \theta _* \in [0,1]: \quad u(t)=(1{-}\theta _*) u(t^-) + \theta _* u(t^+),\\ \text {(iv) }&\forall \,\theta \in [0,1]:\ {{\mathcal {E}}}\big (t,(1{-}\theta ) u(t^-) {+} \theta u(t^+)\big )= (1{-}\theta ){{\mathcal {E}}}(t,u(t^-)) + \theta {{\mathcal {E}}}(t,u(t^+)) . \end{aligned} \end{aligned}$$
(4.6)

Indeed, if we modify u at the time t by replacing \(\theta _*\) in (iii) by any other \(\theta \in [0,1]\), we still have a solution of ERIS, in particular \((1{-}\theta ) u(t^-) {+} \theta u(t^+) \in {{\mathcal {S}}}(t)\).

Proof

The upper energy estimates

$$\begin{aligned} {{\mathcal {E}}}(t,u(t^+))+\Vert u(t^+){-}u(t)\Vert \le {{\mathcal {E}}}(t,u(t)) \text { and } {{\mathcal {E}}}(t,u(t))+\Vert u(t){-}u(t^-)\Vert \le {{\mathcal {E}}}(t,u(t^-)) \end{aligned}$$

follow from the energy balance on \([t,t_2]\) and \([t_1,t]\) and taking the limits \(t_2\rightarrow t^+\) and \(t_1\rightarrow t^-\), respectively. The lower estimates follow from the stability of u(t) and \(u(t^-)\), respectively. For the latter stability use Lemma 4.1 and \({{\mathcal {S}}}(t_n) \ni u(t_n)\rightarrow u(t^-)\) for \(t_n\rightarrow t^-\).

Having the two identities we obtain by summing

$$\begin{aligned} {{\mathcal {E}}}(t,u(t^-))&= {{\mathcal {E}}}(t,u(t^+)) + \Vert u(t^-){-}u(t)\Vert + \Vert u(t){-}u(t^+)\Vert \\&\ge {{\mathcal {E}}}(t,u(t^+))+\Vert u(t^+){-}u(t^-)\Vert \ge {{\mathcal {E}}}(t,u(t^-)), \end{aligned}$$

where the last estimate follows from the stability of \(u(t^-)\). We conclude \(\Vert u(t^-){-}u(t)\Vert + \Vert u(t){-}u(t^+)\Vert =\Vert u(t^+){-}u(t^-)\Vert \), which implies (iii), since we are in a Hilbert space.

To establish (iv) we use the abbreviation \(u_\theta :=(1{-}\theta ) u(t^-) + \theta u(t^+)\). On the one hand, the convexity of \({{\mathcal {J}}}\) and the established identity give the upper estimate

$$\begin{aligned} {{\mathcal {E}}}(t,u_\theta ) \le {{\mathcal {E}}}(t,u_0) - \theta \Vert u_1{-}u_0\Vert . \end{aligned}$$

On the other hand, the stability of \(u_0=u(t^-)\) gives the lower estimate

$$\begin{aligned} {{\mathcal {E}}}(t,u_\theta ) \ge {{\mathcal {E}}}(t,u_0) - \Vert u_\theta {-}u_0\Vert ={{\mathcal {E}}}(t,u_0) - \theta \Vert u_1{-}u_0\Vert . \end{aligned}$$

The last two estimates imply (iv). The stability of \(u_\theta \) now follows from \(u_0 \in {{\mathcal {S}}}(t)\), namely

$$\begin{aligned} {{\mathcal {E}}}(t,u_\theta )={{\mathcal {E}}}(t,u_0)- \Vert u_\theta {-}u_0\Vert \le {{\mathcal {E}}}(t,\widetilde{u}) + \Vert \widetilde{u}{-}u_0\Vert -\Vert u_\theta {-}u_0\Vert \le {{\mathcal {E}}}(t,\widetilde{u}) + \Vert \widetilde{u} {-}u_\theta \Vert . \end{aligned}$$

This proves the result. \(\square \)

4.5 Time-Dependent Dissipation

Instead of looking at the time-dependent energy \({{\mathcal {E}}}(t,u)=t{{\mathcal {J}}}(u)\) and the time-independent dissipation \(\Vert \cdot \Vert \), we may multiply the equation by 1/t to obtain the ERIS \((X,{{\mathcal {J}}},\frac{1}{t}\Vert \cdot \Vert )\), where now the time-dependence is in the dissipation functional

$$\begin{aligned} {{\mathcal {R}}}(t,\dot{u})=\frac{1}{t}\Vert \dot{u}\Vert . \end{aligned}$$

The stability sets \({{\mathcal {S}}}(t)\) are still the same as well as the differential form of the (formal) power balance:

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t} {{\mathcal {E}}}(t,u(t)) + \Vert \dot{u}(t)\Vert = \partial _t{{\mathcal {E}}}(t,u(t)) \quad \Longleftrightarrow \quad \frac{\mathrm {d}}{\mathrm {d}t} {{\mathcal {J}}}(u(t)) + \frac{1}{t}\Vert \dot{u}(t)\Vert =0. \end{aligned}$$

Thus, the rigorously formulated energy balance reads

$$\begin{aligned} {{\mathcal {J}}}(u(t)) + \int _{[s,t]} \frac{1}{\tau }\Vert \mathrm {d}u(\tau )\Vert = {{\mathcal {J}}}(u(s)) \text { for } 0< s < t. \end{aligned}$$
(4.7)

For a general continuous function \(\phi :{]0,\infty [}\rightarrow {{\mathbb {R}}}\) and \(0<s<t\), we can define the weighted variation via

$$\begin{aligned} \begin{aligned} \int _{[s,t]} \phi (\tau )\ \Vert \mathrm {d}u(\tau )\Vert&:= \sup \Big \{ \sum _{j=1}^N \min _{\tau \in [t_{j-1},t_j]} \phi (\tau ) \ \Vert u(t_j){-}u(t_{j-1})\Vert \\&\quad \Big |\; N\in {{\mathbb {N}}}, \ s=t_0{<}t_1{<}\cdots {<}t_N=t\Big \}, \end{aligned} \end{aligned}$$
(4.8)

see also [14, App. B.5].
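Any fixed partition provides a lower bound for (4.8), which is easy to compute; the scalar sketch below (names are ours) illustrates how refining the partition approaches the weighted variation from below:

```python
def weighted_variation_lb(phi, u, partition):
    """Lower bound for the weighted variation (4.8) along one fixed
    partition; the weighted variation itself is the sup over all
    partitions.  For monotone phi the minimum over [a, b] is attained
    at an endpoint, which is all we use here."""
    total = 0.0
    for a, b in zip(partition[:-1], partition[1:]):
        total += min(phi(a), phi(b)) * abs(u(b) - u(a))
    return total

# phi(tau) = 1/tau and u(tau) = tau on [1, 2]: the weighted variation is
# the integral of 1/tau, i.e. log(2) ~ 0.6931, approached from below.
n = 1000
part = [1.0 + k / n for k in range(n + 1)]
print(weighted_variation_lb(lambda s: 1.0 / s, lambda s: s, part))
```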

5 From GS to ERIS

We now show that the solutions \(w(s)={{\mathfrak {T}}}_s(w(0))\) give rise to energetic solutions for the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\). For this we define the mapping \(S:{[0,\infty [} \rightarrow {[0,\infty [} \) via the relation

$$\begin{aligned} S(0)= 0 \quad \text { and } S(t) = \min \big \{\, s \ge 0 \, \big | \, \Vert w'_+(s)\Vert \le 1/t \,\big \} \text { for }t>0. \end{aligned}$$

Note that \(g_w:s\mapsto \Vert w'_+(s)\Vert \) is non-increasing and continuous from the right, hence it is lower semicontinuous and the minimum is really attained. In particular, we have \(g_w(S(t))= \Vert w'_+(S(t))\Vert \le 1/t\) by construction. As a consequence, S is non-decreasing and continuous from the left. Relation (2.6) provides the upper bound \(S(t) \le t\,\sqrt{2}\,\Vert w(0)\Vert \).

If \(g_w\) has a jump at \(s_*>0\) with

$$\begin{aligned} a_*= g_w(s_*^-):=\lim _{s\rightarrow s_*^-} g_w(s) \gneqq g_w(s_*)=b_*, \end{aligned}$$

then S has a plateau with \(S(t)=s_*\) for \(t\in {]1/a_*,1/b_*]}\). Vice versa, if \(g_w\) has a plateau \({[s_1,s_2[}\) with \(g_w(s)=a_*>0\), then S has a jump at \(t_*:=1/a_*\) with \(s_1=S(t_*)=\lim _{t\rightarrow t_*^-} S(t)\) and \(s_2=\lim _{t\rightarrow t^+_*} S(t)\). If \(g_w\) has a plateau with value \(a_*=0\), then the plateau is \({[s_1,\infty [}\), and S remains bounded by \(s_1\).

We now define the function

$$\begin{aligned} u:\left\{ \begin{array}{lll}{[0,\infty [} &{}\rightarrow &{}X,\\ t &{}\mapsto &{}w(S(t)). \end{array} \right. \end{aligned}$$
(5.1)

By construction, the function u is continuous from the left, because \(w:{[0,\infty [} \rightarrow X\) is continuous and \(S:{[0,\infty [} \rightarrow {[0,\infty [}\) is continuous from the left.

The next result is crucial for connecting the gradient system \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\) with the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\).

Proposition 5.1

(Stability) Consider a solution w of \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\), then

$$\begin{aligned} \forall \ s>0:\quad w(s) \in {{\mathcal {S}}}\big (1/\Vert w'_+(s)\Vert \big ). \end{aligned}$$
(5.2)

Proof

For \(s>0\) the right derivative \(w'_+(s)\in X\) exists and \(0\in Ew'_+(s) + \partial {{\mathcal {J}}}(w(s))\). Then, the convexity of \({{\mathcal {J}}}\) gives the estimate

$$\begin{aligned} {{\mathcal {J}}}(v)\ge {{\mathcal {J}}}(w(s))+ \langle -Ew'_+(s),v{-}w(s)\rangle \ge {{\mathcal {J}}}(w(s)) - \Vert w'_+(s)\Vert \,\Vert v{-}w(s)\Vert . \end{aligned}$$

Together with \({{\mathcal {E}}}(t,u)=t {{\mathcal {J}}}(u)\) this means \(w(s) \in {{\mathcal {S}}}(t)\) for \(t=1/\Vert w'_+(s)\Vert \). \(\square \)

We are now ready to establish our main existence result for the ERIS, which is obtained as a consequence of the existence result for the gradient system and the corresponding reparametrization. We emphasize that we do not assume any type of compactness.

Theorem 5.2

(From GS to ERIS) Let \(w:{[0,\infty [} \rightarrow X\) be a solution of the gradient system \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\) with \({{\mathcal {J}}}(w(0))<\infty \), then the function \(u:{[0,\infty [} \rightarrow X\) defined in (5.1) is an energetic solution for the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\) in the sense of (4.1).

Proof

For \(w(0)=0\) we have \(w\equiv 0\) and hence \(u\equiv 0\), which is trivially an energetic solution of ERIS. Thus, we now assume \(w(0)\ne 0\).

Stability (S): For \(t=0\) the stability \(u(0)=w(0)\in {{\mathcal {S}}}(0)=X\) is trivial. For \(t>0\) we have \(s=S(t)>0\) and conclude

$$\begin{aligned} u(t)=w(S(t)) \in {{\mathcal {S}}}\big (1/g_w(S(t))\big ) \subset {{\mathcal {S}}}(t), \end{aligned}$$

where we used \(g_w(S(t))\le 1/t\). Hence, (4.1a) is established.

Energy balance (E): According to the general theory of RIS, it is sufficient to establish an upper energy estimate, since the lower estimate is a consequence of the stability, cf. [14, Prop. 2.1.23] or [15].

In principle the energy balance follows from the energy-dissipation balance (2.3) for the gradient system, where we would like to use the time reparametrization \(s=S(t)\) or \(t=g_w(s)=1/\Vert w'_+(s)\Vert \) giving formally \(\Vert w'(s)\Vert \;\!{\mathrm {d}}s = \;\!{\mathrm {d}}t\). However, because of jumps we have to be more careful and estimate the dissipation explicitly. We have

$$\begin{aligned} {\mathrm {Var}}_{\Vert \cdot \Vert }(u,[r,t]) = \sup \Big \{\, \sum _{j=1}^N \Vert u(t_j){-}u(t_{j-1})\Vert \; \Big | \; N\in {{\mathbb {N}}}, \ r= t_0<t_1< \cdots < t_N= t \,\Big \} . \end{aligned}$$

We choose a finite partition \((t_j)_{j=0,1,\ldots ,N}\) of \([r,t]\) such that \({\mathrm {Var}}_{\Vert \cdot \Vert }(u,[r,t])\) is approximated up to an error smaller than \(\varepsilon \) and set \(s_j=S(t_j)\). The minimality in the definition of \(S(t_j)\) gives \(\Vert w'_+(s)\Vert \ge 1/t_j\) for \(s< s_j\), so that (2.3) yields

$$\begin{aligned} {{\mathcal {J}}}(w(s_{j-1}))&= {{\mathcal {J}}}(w(s_j))+\int _{s_{j-1}}^{s_j} \Vert w_+'(s)\Vert ^2 \;\!{\mathrm {d}}s \ge {{\mathcal {J}}}(w(s_j))+\frac{1}{t_j}\int _{s_{j-1}}^{s_j} \Vert w_+'(s)\Vert \;\!{\mathrm {d}}s\\&\ge {{\mathcal {J}}}(w(s_j))+\frac{1}{t_j} \Vert w(s_j){-}w(s_{j-1})\Vert . \end{aligned}$$

In terms of the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\) and the function u, this means

$$\begin{aligned} {{\mathcal {E}}}(t_j,u(t_j)) + \Vert u(t_j){-}u(t_{j-1})\Vert \le {{\mathcal {E}}}(t_{j-1},u(t_{j-1})) + \int _{t_{j-1}}^{t_j} \partial _\tau {{\mathcal {E}}}(\tau ,u(t_{j-1})) \;\!{\mathrm {d}}\tau . \end{aligned}$$

Summing over \(j\in \{1,..,N\}\) we obtain

$$\begin{aligned} {{\mathcal {E}}}(t,u(t))+ {\mathrm {Var}}_{\Vert \cdot \Vert }(u;[r,t])\le \varepsilon + {{\mathcal {E}}}(r,u(r)) + \sum _{j=1}^N \int _{t_{j-1}}^{t_j} \partial _\tau {{\mathcal {E}}}(\tau ,u(t_{j-1})) \;\!{\mathrm {d}}\tau . \end{aligned}$$

Taking \(\varepsilon \) and the fineness of the partition to 0 simultaneously, we obtain the desired energy estimate (4.1b) on all intervals \([r,t]\) with \(0<r<t\).

Initial condition (I): Using \({{\mathcal {J}}}(w(0))={{\mathcal {J}}}(u(0)) <\infty \) and the monotonicity of \({{\mathcal {J}}}\), we obtain \(\int _0^1 {{\mathcal {J}}}(u(t))\;\!{\mathrm {d}}t \le {{\mathcal {J}}}(u(0))<\infty \). Thus, Proposition 4.4 provides the desired continuity, and (4.1c) is established as well. \(\square \)

6 From ERIS to GS

Here we show that every energetic solution \(u:{[0,\infty [} \rightarrow X\) for the ERIS \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\) gives rise to a solution \(w:{[0,\infty [} \rightarrow X\) for the GS \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\). We do this by reparametrization. However, at jumps we need to fill in pieces, which can be done in a piecewise affine manner.

Affine interpolations are defined for each function \(g:{[0,\infty [} \rightarrow V\) having left and right limits \(g(t^\pm )\) for all t, where \(g(0^-):=g(0)\). The interpolant reads

$$\begin{aligned} g_\theta (t)= (1{-}\theta ) g(t^-) + \theta g(t^+), \text { where }\theta \in [0,1] \text { and } t\ge 0. \end{aligned}$$

The time reparametrization is given in terms of the left-continuous function

$$\begin{aligned} \widehat{s}(t)&=\int _0^t \!\tau \Vert \mathrm {d}u(\tau )\Vert \\&:= \sup \Big \{\, \sum _{j=1}^N t_{j-1} \Vert u(t_j){-}u(t_{j-1})\Vert \; \Big | \; N\in {{\mathbb {N}}}, \ 0\le t_0{<}t_1{<}\cdots {<}t_N\le t \,\Big \} . \end{aligned}$$
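To make the partition supremum concrete, the following small numerical sketch (our own scalar illustration, not part of the paper's setting) approximates the weighted variation by partition sums; for an absolutely continuous u one has \(\widehat{s}(t)=\int _0^t \tau \,|u'(\tau )| \;\!{\mathrm {d}}\tau \), and every partition sum is a lower bound.

```python
def weighted_variation(u, ts):
    """Partition sum  sum_j t_{j-1} * |u(t_j) - u(t_{j-1})|  over the
    partition ts; by definition this is a lower bound for s_hat(ts[-1])."""
    return sum(ts[j - 1] * abs(u(ts[j]) - u(ts[j - 1]))
               for j in range(1, len(ts)))

# Hypothetical scalar example: u(tau) = tau^2 on [0, 1], so that
# s_hat(1) = int_0^1 tau * |2 tau| d tau = 2/3.
N = 20000
ts = [j / N for j in range(N + 1)]       # uniform partition of [0, 1]
approx = weighted_variation(lambda t: t * t, ts)
# approx lies slightly below the supremum 2/3 and converges as N grows
```

Refining the partition increases the sums toward the supremum, in line with the definition of \(\widehat{s}\) above.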

A generalized inverse of this function is given by

$$\begin{aligned} \widehat{t}(s) := \inf \{\, t\ge 0 \, | \, \widehat{s} (t)\ge s \,\} . \end{aligned}$$

Hence, \(\widehat{s}\) has a jump at \(t_*\) whenever u has a jump at \(t_*\); more precisely, \(\widehat{s}(t^+_*)-\widehat{s}(t_*)= t_*\Vert u(t^+_*){-}u(t_*)\Vert \). In contrast, \(\widehat{t}\) then has a plateau, namely \(\widehat{t}(s)= t_*\) for \(s\in {[\widehat{s}(t_*),\widehat{s}(t^+_*)[}\). Moreover, if \(\widehat{s}\) has a plateau on \({]t_1,t_2]}\) with value \(s_*\), then \(\widehat{t}\) has a jump at \(s_*\).
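For orientation, here is an elementary scalar illustration (ours, not from the text) with \(X={{\mathbb {R}}}\), a hypothetical jump time \(t_*>0\), and jump size \(a>0\):

```latex
% Hypothetical illustration: u(t) = 0 for t \le t_* and u(t) = a for t > t_*,
% i.e. a single jump of size a > 0 at time t_* > 0. The weighted variation is
\widehat{s}(t) = \begin{cases} 0, & t \le t_*,\\ t_*\, a, & t > t_*, \end{cases}
\qquad\text{so}\qquad
\widehat{s}(t_*^+) - \widehat{s}(t_*) = t_*\, a = t_*\,\|u(t_*^+) - u(t_*)\| ,
% while the generalized inverse exhibits the corresponding plateau
\widehat{t}(s) = t_* \quad\text{for } s \in {]0,\, t_*\, a[} .
```

The jump of \(\widehat{s}\) and the plateau of \(\widehat{t}\) are dual to each other, exactly as described above.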

For a given energetic solution \(u:{[0,\infty [} \rightarrow X\), we define the function

$$\begin{aligned} w(s) = u_\theta (\widehat{t}(s)) \quad \text {for } s= \widehat{s}_\theta (\widehat{t}(s)) = (1{-}\theta )\,\widehat{s}(\widehat{t}(s)) + \theta \, \widehat{s}\big (\widehat{t}(s)^+\big ), \quad \text {where } \widehat{\sigma }(s):=\widehat{s}(\widehat{t}(s)), \end{aligned}$$
(6.1)

Note that \(\widehat{\sigma }(s)=\max \big \{\, \widehat{s}(t) \, \big | \, t\ge 0,\ \widehat{s}(t)\le s \,\big \} \le s\) and \(\widehat{\sigma }(s)< s\) only in regions \({]s_1,s_2[}\) which are not covered by the range of \(\widehat{s} \), i.e. there exists \(t_*\) such that \(\widehat{s}(t_*)\le s_1 < s_2 \le \widehat{s}(t_*^+)\) and \(\widehat{t}(s)=t_*\) for \(s\in {]s_1,s_2[}\).
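The relation \(\widehat{\sigma }(s)\le s\), with strict inequality exactly on the gaps, can be checked in a tiny sketch (again our own hypothetical scalar example: one jump at \(t_*=1\) of size 1, all maps evaluated in closed form):

```python
def s_hat(t):
    # left-continuous weighted variation: u jumps at t_* = 1 by 1, so the
    # jump contributes t_* * 1 = 1 for t > t_* and nothing before
    return 1.0 if t > 1.0 else 0.0

def t_hat(s):
    # generalized inverse  inf { t >= 0 : s_hat(t) >= s }
    if s <= 0.0:
        return 0.0
    return 1.0 if s <= 1.0 else float("inf")

def sigma_hat(s):
    return s_hat(t_hat(s))

# On the gap ]s_hat(t_*), s_hat(t_*^+)[ = ]0, 1[ we get t_hat(s) = t_* = 1
# (the plateau) and sigma_hat(s) = s_hat(1) = 0 < s (strict inequality).
```

On the gap the value \(\widehat{t}(s)=t_*\) is constant while \(\widehat{\sigma }(s)\) stays at the pre-jump value, which is exactly the region where the affine filling via \(u_\theta \) becomes active.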

Theorem 6.1

(From energetic solutions to gradient-flow solutions) If \(u:{[0,\infty [} \rightarrow X\) is an energetic solution for \((X,{{\mathcal {E}}},\Vert \cdot \Vert )\), then the function \(w:{[0,\infty [} \rightarrow X\) defined in (6.1) is a solution for the gradient system \((X,{{\mathcal {J}}},\frac{1}{2}\Vert \cdot \Vert ^2)\), i.e. it satisfies (1.1).

Proof

Step 1: We first show that w lies in \(\mathrm {H}^1_\text {loc}({]0,\infty [};X)\). For this, we estimate \(\frac{1}{s_2{-}s_1} \Vert w(s_2){-}w(s_1)\Vert ^2\) as follows. Setting \(t_j:=\widehat{t}(s_j)\), we have \(s_j =(1{-}\theta _j) \widehat{s}(t_j)+ \theta _j \widehat{s}(t_j^+)\) for suitable \(\theta _j \in [0,1]\). With this choice and the definition of \(\widehat{s}\) based on the weighted variation of u, we obtain

$$\begin{aligned} \begin{aligned} s_2{-}s_1&= \theta _2\big (\widehat{s}(t_2^+)- \widehat{s}(t_2) \big ) + \big ( \widehat{s}(t_2) - \widehat{s}(t_1^+) \big ) + (1{-}\theta _1) \big (\widehat{s}(t_1^+) - \widehat{s}(t_1) \big )\\&\ge \theta _2t_2 \Vert u(t_2^+){-}u(t_2)\Vert + t_1{\mathrm {Var}}_{\Vert \cdot \Vert }(u,{]t_1,t_2[}) + (1{-}\theta _1)t_1\Vert u(t_1^+){-}u(t_1)\Vert \\&\ge t_1\Vert \, u_{\theta _2}(t_2) - u_{\theta _1}(t_1)\,\Vert \ = \ t_1\Vert w(s_2){-}w(s_1)\Vert . \end{aligned} \end{aligned}$$
(6.2)

We conclude that for all \(0< s_1<s_2\) we have

$$\begin{aligned} \frac{1}{s_2{-}s_1} {\Vert w(s_2){-}w(s_1)\Vert ^2} = \frac{\Vert w(s_2){-}w(s_1)\Vert }{s_2{-}s_1} \, \Vert u_{\theta _2}(t_2) {-} u_{\theta _1}(t_1) \Vert \le \frac{1}{t_1}\,\Vert u_{\theta _2}(t_2) {-} u_{\theta _1}(t_1) \Vert . \end{aligned}$$

Hence, for any partition \(0<r=s_0< s_1< \cdots<s_{N-1}<s_N=s\) of \([r,s]\) we obtain

$$\begin{aligned}&\sum _{j=1}^N \frac{ \Vert w(s_j){-}w(s_{j-1})\Vert ^2}{s_j{-}s_{j-1}} \le \sum _{j=1}^N \frac{1}{t_{j-1}} \Vert u_{\theta _j}(t_j){-}u_{\theta _{j-1}}(t_{j-1})\Vert \le \frac{1}{t_0}{\mathrm {Var}}_{\Vert \cdot \Vert }\big (u, [\widehat{t}(r), \widehat{t}(s^+)] \big )< \infty . \end{aligned}$$

Since the partition was arbitrary and since every energetic solution has bounded variation on all intervals compactly contained in \({]0,\infty [}\), we conclude \(w|_{[r,s]} \in \mathrm {H}^1([r,s];X)\), which is the desired result.

Step 2: More precisely, for \(s_j\) with \(\widehat{\sigma }(s_j)=s_j\) we have

$$\begin{aligned} \int _{s_1}^{s_2} \Vert w'(s)\Vert ^2 \;\!{\mathrm {d}}s \le \int _{\widehat{t}(s_1)}^{\widehat{t}(s_2)} \frac{1}{t}\, \Vert \mathrm {d}u(t)\Vert = {{\mathcal {J}}}\big (u(\widehat{t}(s_1))\big ) {-} {{\mathcal {J}}}\big (u(\widehat{t}(s_2))\big ) = {{\mathcal {J}}}(w(s_1)){-} {{\mathcal {J}}}(w(s_2)) , \end{aligned}$$

where we used the energy balance (4.7) for the time-dependent dissipation model.

Moreover, dividing (6.2) by \((s_2{-}s_1)t_1\) we may pass to the limit \(s_2\rightarrow s^+_1\) for almost all \(s_1=s>0\) and obtain

$$\begin{aligned} \frac{1}{t_1}= \frac{1}{\widehat{t}(s)} = \Vert w'_+(s)\Vert . \end{aligned}$$

Step 3: We now show that w solves the gradient-flow equation (1.1), i.e. \(0\in Ew'(s) + \partial {{\mathcal {J}}}(w(s))\) for a.a. \(s>0\). For this we use the stability of u. Indeed, by the construction of the interpolant \(u_\theta (t)\), we have the stability \(u_\theta (t)\in {{\mathcal {S}}}(t)\); see Proposition 4.7. Since \(u_\theta (t)\) is a minimizer of \(\widetilde{u} \mapsto t{{\mathcal {J}}}(\widetilde{u}) + \Vert \widetilde{u}{-}u_\theta (t)\Vert \), we know that \(\partial {{\mathcal {J}}}(u_\theta (t))\) is nonempty and that \(\Vert \partial ^0{{\mathcal {J}}}(u_\theta (t))\Vert _* \le 1/t\). Translating this to the variable \(s=\widehat{s} (t)\), we obtain

$$\begin{aligned} \forall \,s>0 : \quad \Vert \eta (s)\Vert \le 1/\widehat{t}(s)= \Vert w'(s)\Vert \quad \text {with } \eta (s)=\partial ^0{{\mathcal {J}}}(w(s)) , \end{aligned}$$
(6.3)

where the last relation follows with Step 2.

Since \({{\mathcal {J}}}\) is convex and \(w|_{[s_1,s_2]}\in \mathrm {H}^1([s_1,s_2];X)\) with \([s_1,s_2]\ni s\mapsto {{\mathcal {J}}}(w(s))\) bounded and nonincreasing, we can apply the chain rule (see [7, Lem. 3.3]) and obtain

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}s} {{\mathcal {J}}}(w(s)) = \langle \eta (s),w'(s)\rangle \quad \text { a.e.} \end{aligned}$$

Integration gives the first identity in the following relations, and exploiting (6.3) and the energy estimate from Step 2 yields

$$\begin{aligned} {{\mathcal {J}}}(w(s_2))-{{\mathcal {J}}}(w(s_1))&= \int _{s_1}^{s_2}\langle \eta (s),w'(s)\rangle \;\!{\mathrm {d}}s \ge \int _{s_1}^{s_2}{-} \Vert \eta (s)\Vert \,\Vert w'(s)\Vert \;\!{\mathrm {d}}s \\&\overset{{(\text {6.3})}}{\ge }- \int _{s_1}^{s_2}\Vert w'(s)\Vert ^2 \;\!{\mathrm {d}}s \overset{\text {Step 2}}{\ge }{{\mathcal {J}}}(w(s_2))-{{\mathcal {J}}}(w(s_1)) . \end{aligned}$$

Thus, we conclude that all “\(\ge \)” must be equalities. Equality in the Cauchy–Schwarz estimate shows \(\eta (s)= -\alpha (s) Ew'(s)\) for some \(\alpha (s)\ge 0\). By (6.3) we know \(\alpha (s) \le 1\), but then the second estimate gives \(\alpha (s)=1\) for a.e. \(s>0\). Thus, \(\eta (s)=-Ew'(s)\) for a.e. \(s>0\) gives the desired equation \(0\in Ew'(s) + \partial {{\mathcal {J}}}(w(s))\). \(\square \)

With the available link from the ERIS to the GS, we are now able to conclude the proof of our main theorem on the ERIS by exploiting the existence and uniqueness results for the GS in Sect. 2, which do not need any compactness assumption.

Proof of Theorem 1.1

Existence of solutions for the ERIS follows from Theorem 5.2, which shows that suitably reparametrizing the solutions of the GS leads to energetic solutions in the sense of (4.1) for the ERIS.

The uniqueness of energetic solutions follows via Theorem 6.1, since every energetic solution generates a solution of the gradient system. Since the latter is unique, the uniqueness of energetic solutions follows once we choose the unique left-continuous variant, thereby neglecting the freedom to choose the value \(u_\theta (t)=(1{-}\theta )u(t^-)+\theta u(t^+)\) of an energetic solution at a jump time t. \(\square \)

In our main theorem, we have restricted the existence and uniqueness result for the ERIS to initial values \(u^0\) with \({{\mathcal {J}}}(u^0)<\infty \). For the GS the existence and uniqueness result extends to all initial conditions \(u^0\in X\) because dom\(({{\mathcal {J}}})\) is assumed to be dense. However, there is a subtle issue about the rate-independent rescaling \(u(t)=w(S(t))\) which may lead to \({\mathrm {Var}}_{\Vert \cdot \Vert }(u;[0,1])=\infty \) when doing the corresponding reparametrization. It remains an open problem to provide an intrinsic formulation of energetic solutions and their attainment of the initial condition in the case of infinite variation near \(t=0\), see Remark 4.5.