1 Introduction

A planar network is a pair \((\Gamma ,G)\), where G is an abstract connected graph with edges homeomorphic to the interval [0, 1] and \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a continuous map, see Definition 2.2. We shall mostly consider triple junctions networks, that is, networks in which the edges of G either meet at junctions of order three or end at terminal points of the graph, and such that the restriction \(\gamma ^i\) of \(\Gamma \) to each edge is a \(C^1\)-embedding, see Definition 2.4. If we further require that embedded edges meet forming angles equal to \(\tfrac{2}{3}\pi \), the network is said to be regular; if, in addition, such embeddings are straight segments, it is said to be minimal, see Definition 2.4.

Minimal networks are easily seen to be critical points of the length functional \({{\textrm{L}}}\), defined as the sum of the lengths of the embedded edges, computed via the parametrization \(\Gamma \) of a network. This class includes Steiner trees of finitely many points in the plane, that is, networks minimizing the length among those connecting the given points [50].

In this paper we investigate functional analytic and stability properties of the length functional and of the \(L^2\)-gradient flow of \({{\textrm{L}}}\).

The first of our main results consists in proving Łojasiewicz–Simon gradient inequalities for the length functional in \(H^2\)-neighborhoods of minimal networks.

Theorem 1.1

(cf. Corollary 3.14) Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then there exist \(C_{\textrm{LS}},\sigma >0\) and \(\theta \in (0,\tfrac{1}{2}]\) such that the following holds.

If \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a regular network of class \(H^2\) such that \(\Gamma \) and \(\Gamma _*\) have the same endpoints and

$$\begin{aligned} \sum _i \Vert \gamma ^i_* - \gamma ^i \Vert _{H^2(\textrm{d}x)} \le \sigma , \end{aligned}$$

where \(\gamma ^i_*, \gamma ^i\) are the restrictions of \(\Gamma _*, \Gamma \) to the i-th edge of G, respectively, then

$$\begin{aligned} \left| {\textrm{L}}(\Gamma )-{\textrm{L}}(\Gamma _*) \right| ^{1-\theta } \le C_{\textrm{LS}} \left( \sum _i \int _0^1 |{\varvec{k}}^i|^2 \,\mathrm ds \right) ^{\frac{1}{2}}, \end{aligned}$$
(1.1)

where \({\varvec{k}}^i\) is the curvature of \(\gamma ^i\).
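To put (1.1) in perspective, recall the classical first variation of the length: for a variation field \(X=(X^i)\) vanishing at the endpoints and continuous across the junctions, a regular network satisfies

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon }\bigg |_{\varepsilon =0}{\textrm{L}}(\Gamma +\varepsilon X) = -\sum _i \int _0^1 \langle {\varvec{k}}^i, X^i\rangle \,\mathrm ds, \end{aligned}$$

the boundary terms at the junctions cancelling because the three inner tangent vectors sum to zero there. Hence the right-hand side of (1.1) is precisely the norm of the \(L^2(\textrm{d}s)\)-gradient of \({\textrm{L}}\), which motivates the name gradient inequality.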

Estimates like (1.1) are named after Łojasiewicz and Simon due to their seminal works [38, 39, 58], where they first proved and employed analogous inequalities for analytic functionals over finite- or infinite-dimensional linear spaces. As we shall see, the validity of a Łojasiewicz–Simon inequality is sufficient to imply strong stability properties of critical points of the energy under consideration.
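A finite-dimensional model clarifies the role of the exponent \(\theta \). For the analytic function \(f(x)=x^{2m}\) on \({\mathbb {R}}\), with \(m\ge 1\) and critical point \(x_*=0\), one has

$$\begin{aligned} |f(x)-f(x_*)|^{1-\theta } = |x|^{2m-1} = \tfrac{1}{2m}\,|f'(x)| \qquad \text {for } \theta =\tfrac{1}{2m}, \end{aligned}$$

so the flatter the critical point, the smaller the admissible exponent, the nondegenerate case \(m=1\) corresponding to \(\theta =\tfrac{1}{2}\).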

Theorem 1.1 is actually a particular case of a more general result yielding a Łojasiewicz–Simon inequality for the length functional among triple junctions networks; that is, one does not need to ask that edges form angles equal to \(\tfrac{2}{3}\pi \) at junctions to get a version of (1.1), see Theorem 3.13. As discussed in Remark 3.15, it is even possible to generalize the inequality to triple junctions networks whose endpoints are left free to vary.

Proving Theorem 1.1 as a consequence of a Łojasiewicz–Simon inequality holding among triple junctions networks not only gives an inequality for a much larger class of networks, but also simplifies its proof, since triple junctions networks do not need to satisfy an additional nonlinear requirement on the angles at junctions. Indeed the proof of Theorem 3.13, which implies Theorem 1.1, eventually follows by employing a by now established method for proving these kinds of inequalities for extrinsic geometric functionals [10, 13, 45, 46, 52], a method relying on linear functional analysis.

Once one is able to parametrize the considered competitors as normal graphs over the critical point, the inequality eventually follows from a general functional analytic result, see Proposition 3.12, based on [9]. However, differently from the previous cases, the nonsmooth structure of networks necessarily introduces technical complications, as networks close to a fixed minimal one \(\Gamma _*\) cannot be written as normal graphs over the critical point. Hence, in order to perform a graph parametrization of networks close to \(\Gamma _*\), we need to allow for graphs having both a normal and a tangential component with respect to \(\Gamma _*\). This would generally violate the assumptions needed to produce a Łojasiewicz–Simon inequality, cf. Proposition 3.12, since variations of \(\Gamma _*\) in tangential directions are equivalent to reparametrizations of the curves of a network, and thus generate an infinite-dimensional kernel for a geometric functional like the length. We fix this issue by prescribing that the tangential components of these graph parametrizations depend linearly on the normal ones. Since a relation between such normal and tangential components is naturally satisfied at the junctions, the chosen dependence of the tangential components on the normal ones is given by a suitable prolongation of the relations at the junctions to the interior of the edges, see Proposition 3.4. We mention that an analogous construction has also been employed in [25].

It is possible to exploit the Łojasiewicz–Simon inequality in Theorem 1.1 to prove the stability of minimal networks with respect to the \(L^2\)-gradient flow of \({\textrm{L}}\), the so-called motion by curvature of networks. Along such a flow, a regular network evolves keeping its endpoints fixed and moving with normal velocity equal to the curvature vector along each edge, see Sect. 2.3. The motion by curvature generalizes the one-dimensional mean curvature flow, called curve shortening flow, to the realm of singular one-dimensional objects given by planar networks.

Bronsard and Reitich [4] first attempted to find strong solutions to the motion by curvature, providing local existence and uniqueness of solutions for admissible initial regular networks of class \(C^{2+\alpha }\) with the sum of the curvatures at the junctions equal to zero. The basic theory concerning short time existence and uniqueness of the motion by curvature was then carried out in [43], and further improved in [26] in order to prove existence of the flow starting from any regular network without extra assumptions on the initial datum. The parabolic regularization of the flow has also been addressed in [26]. It is known that the flow develops singularities, see Theorem 2.13, and a great deal of work has been done to understand the nature of these singularities and to define the flow past singularities [1, 7, 8, 30, 37, 40, 42, 43, 44].

However, exploiting the Łojasiewicz–Simon inequality we can prove that a flow starting sufficiently close to a minimal network in \(H^2\) exists for every time and smoothly converges to a (possibly different) minimal network. We mention that global existence of the flow starting close to critical points, and convergence along a diverging sequence of times, was first studied in [34]. Hence the next theorem recovers and improves the main results of [34], see also Theorem 5.3 below.

Theorem 1.2

(cf. Theorem 5.2) Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then there exists \(\delta _{\Gamma _*}>0\) such that the following holds.

Let \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network having the same endpoints as \(\Gamma _*\) and such that \(\Vert \gamma ^i_0-\gamma ^i_*\Vert _{H^2(\textrm{d}x)}\le \delta _{\Gamma _*}\). Then the motion by curvature \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\) starting from \(\Gamma _0\) exists for all times and smoothly converges to a minimal network \(\Gamma _\infty \) such that \({\textrm{L}}(\Gamma _\infty )={\textrm{L}}(\Gamma _*)\), up to reparametrization.

The fact that \(\Gamma _\infty \) may be different from \(\Gamma _*\) in the above theorem cannot be avoided, as there exist examples of one-parameter families of minimal networks having fixed endpoints. Consider for instance networks given by concentric regular hexagons with segments connecting the six vertices to fixed vertices of a bigger regular hexagon, as depicted in Fig. 3 below.

Observe that in Theorem 1.2 the initial datum \(\Gamma _0\) does not need to satisfy further geometric properties at junctions or endpoints beyond being regular; this is possible as we employ the short time existence theory recently developed in [26], which removes the additional geometric assumptions required by the previous existence theorems in [43, 44]. On the other hand, it is an open problem to understand whether a stability result like Theorem 1.2 holds for an initial datum \(\Gamma _0\) that is possibly non-regular, such as a \(\Gamma _0\) with only triple junctions but with angles possibly different from \(\tfrac{2}{3}\pi \) at junctions, that is sufficiently close in \(H^2\) to a reference minimal network. For such a network \(\Gamma _0\), it is possible to define a motion by curvature \(\Gamma _t\) starting from \(\Gamma _0\) [30, 37], and \(\Gamma _t\) is instantaneously regular for \(t>0\); however, the crucial short time properties we need (see Theorem 2.10 and Lemma 5.1) are delicate in this setting.

Finally, we observe that no minimizing property of \(\Gamma _*\) is required in Theorem 1.2. Instead, by means of a simple comparison argument it is possible to show that minimal networks automatically minimize the length among suitably small \(C^0\) perturbations, see Lemma 4.1. Once such a minimality property is established, the proof of Theorem 1.2 follows by adapting a general argument outlined in [45, 46].

As a consequence of Theorem 1.2, it immediately follows that if a motion by curvature smoothly converges to a minimal network along a sequence of times, then it smoothly converges as \(t\rightarrow +\infty \), see Theorem 5.3.

The final main contribution of this work is given by the rigorous construction of an example of motion by curvature presenting a topological singularity in infinite time.

It is known, see Theorem 2.13, that the motion by curvature may develop singularities, consisting in the blow-up of the \(L^2\)-norm of the curvature or in the disappearance of a curve whose length tends to zero. There exist well-known examples where one or both of the previous alternatives occur in finite time, see for instance [43] and [42, Section 6]. Here we construct, for the first time, an example of a motion by curvature existing for every time and such that a curve of the evolving network vanishes in infinite time while the curvature remains uniformly bounded.

Theorem 1.3

(cf. Theorem 6.1) There exists a smooth regular network \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) such that the motion by curvature \(\Gamma _t\) starting from \(\Gamma _0\) exists for every time, the length of each curve \(\gamma ^i_t\) remains strictly positive for every time, the curvature of each curve \(\gamma ^i_t\) is uniformly bounded from above, and \(\Gamma _t\) smoothly converges to a degenerate network \(\Gamma _\infty \) as \(t\rightarrow +\infty \), up to reparametrization. Specifically, the length of a distinguished curve \(\gamma ^0_t\) tends to zero as \(t\rightarrow +\infty \).

For proving the above theorem we will specifically consider the evolution of the network sketched in Fig. 1.

Fig. 1 Continuous lines: initial datum \(\Gamma _0\). Dotted lines: limit degenerate network \(\Gamma _\infty \). By the choice of the mutual distances of the endpoints, the dotted lines intersect forming angles equal to \(\tfrac{\pi }{3}\) and \(\tfrac{2}{3}\pi \)

For \(\Gamma _0\) as in Fig. 1, the endpoints determine a rectangle whose diagonals intersect forming angles exactly equal to \(\tfrac{\pi }{3}\) and \(\tfrac{2}{3}\pi \). We will prove that the central curve in Fig. 1 shrinks along the flow in infinite time and the four remaining curves converge to the diagonals connecting the four endpoints.

We will study the motion by curvature starting from such a \(\Gamma _0\) explicitly. The symmetry chosen for producing this example allows us to generalize some ideas from [18] to the context of networks in order to uniformly bound the curvature along the motion, exploiting a monotonicity-type formula, see Lemma A.2. Eventually, a comparison with solutions of heat-type equations shows that convergence occurs in infinite time.

We stress that the example of Theorem 1.3 yields a simple and explicit flow converging to a degenerate critical point of the length, implying that the topology of the evolving networks changes in the limit. This change of topology is the fundamental reason why such an example has to be studied individually and why its convergence cannot follow from the Łojasiewicz–Simon inequalities proved in Theorem 1.1 or Theorem 3.13. This simple example motivates the search for improvements of the general method for proving convergence, as well as for possibly weaker variants of Łojasiewicz–Simon inequalities able to take into account these changes of topology in the limit. This project goes beyond the scope of the present paper and is left for future investigation.

We finally observe that the analysis carried out for the example proving Theorem 1.3 actually provides a family of examples of evolving networks with uniformly bounded curvature exhibiting every possible long time behavior: vanishing in infinite time of the length of a curve, vanishing in finite time of the length of a curve, and convergence in infinite time to a regular network. The first case is the one claimed in Theorem 1.3; the other cases are simple modifications of it and are described more precisely in the next remark.

Remark 1.4

Let \(L>0\). Consider a smooth regular initial datum \(\Gamma _{0,L}\) completely analogous to the one in Fig. 1, whose four external curves connected to the endpoints are the same for every L, but whose central vertical edge has length L at time zero. In particular, there is \({\overline{L}}\) such that \(\Gamma _{0,{\overline{L}}}\) is exactly the network in Fig. 1, that is, the rectangle determined by the endpoints has sides of length \(2/\sqrt{3}\) and 2.

It can be proved (see Step 1 below) that the convexity of the four external curves is preserved along the flow, and that the central curve remains vertical with strictly decreasing length. Moreover, the curvature of each curve is uniformly bounded (see the first part of Step 2 below) and, by comparison, each of the four external curves always remains on the same side of a suitable line; for instance, the evolved curve \(\gamma ^1_t\) stays below the line passing through the endpoint of \(\gamma ^1_t\) and forming an angle equal to \(\tfrac{\pi }{6}\) with the horizontal axis (see the argument in Step 3 below).

Let \(T_L>0\) be the maximal time of existence of the motion by curvature starting from \(\Gamma _{0,L}\). By uniqueness and locality of the flow, it is clear that the evolution of the four external curves of \(\Gamma _{t,L}\) coincides with that of the four external curves of \(\Gamma _{t,L'}\) for any \(t \in [0,\min \{T_L,T_{L'}\})\); in particular, the evolution of such curves is independent of the length of the central curve.

It follows that for \(L>{\overline{L}}\) the length of the central curve is always bounded away from zero and then the above observations imply that \(T_L=+\infty \) and the flow smoothly converges in infinite time to a minimal network.

On the contrary, if \(L\in (0,{\overline{L}})\), the length of the central curve vanishes in finite time, leading to a topological singularity of the flow in finite time.

We conclude this introduction by mentioning some further contributions related to the topic.

The use of Łojasiewicz–Simon inequalities has become a prominent tool for understanding stability properties and convergence of geometric flows. Apart from the above-mentioned references, let us also recall the recent results on the uniqueness of blow-ups for the mean curvature flow [11, 12, 57], and the application of the method to constrained high-order extrinsic flows [25, 55, 56]. These inequalities have also been successfully applied in the context of intrinsic geometric flows, namely in [6, 22].

Apart from stability results, which are the main focus of the current paper, there are several questions concerning the motion by curvature of networks: the study of singularities, global existence, and extensions to classes of weaker objects. As said above, there is an extensive amount of literature concerning the analysis of the flow in the framework of classical PDEs (see [43] and references therein). There are also several generalized weak notions of the flow, see for instance [3, 21, 32, 35, 59]. Recently, interesting progress has been made both in the direction of proving regularity of weak solutions [32, 33] and in establishing so-called weak-strong uniqueness theorems [23, 27].

It is worth mentioning that the motion by curvature of networks was first proposed for modelling reasons [49] and has recently again attracted the attention of the applied mathematical community [2, 19, 20, 31].

Organization. In Sect. 2 we collect basic definitions and results on networks and on the motion by curvature. In Sect. 3 we establish the graph parametrization of networks over minimal ones and we prove the Łojasiewicz–Simon inequality implying Theorem 1.1. In Sect. 4 we prove that minimal networks locally minimize the length in \(C^0\). Section 5 is devoted to the proof of the stability of minimal networks, implying Theorem 1.2. In Sect. 6 we prove Theorem 1.3 by analyzing the motion of networks like the one in Fig. 1. In Appendix A we collect some tools needed in the proofs, namely a well-known quantitative implicit function theorem and a monotonicity-type formula. In Appendix B we discuss extensions of our results to the case of networks on Riemannian surfaces.

2 Preliminaries

2.1 Networks

For a regular curve \(\gamma :[0,1]\rightarrow {\mathbb {R}}^2\) of class \(H^2\), define

$$\begin{aligned} \tau _\gamma :=\frac{\gamma '}{|\gamma '|}, \quad \nu _\gamma :=\textrm{R}(\tau _\gamma ), \end{aligned}$$

the tangent and the normal vector, respectively, where \(\textrm{R}\) denotes the counterclockwise rotation by \(\tfrac{\pi }{2}\). We define the arclength element \(\,\mathrm ds_\gamma :=|\gamma '| \,\mathrm dx\) and the arclength derivative \(\partial _s:=|\gamma '|^{-1}\partial _x\). The curvature of \(\gamma \) is the vector

$$\begin{aligned} {\varvec{k}}_\gamma :=\partial _s^2 \gamma . \end{aligned}$$

We shall usually drop the subscript \(\gamma \) when there is no risk of confusion.
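For instance, for the constant speed parametrization \(\gamma (x) = r(\cos 2\pi x, \sin 2\pi x)\) of the circle of radius \(r>0\), a direct computation gives

$$\begin{aligned} |\gamma '| = 2\pi r, \qquad \tau = (-\sin 2\pi x, \cos 2\pi x), \qquad {\varvec{k}} = \partial _s \tau = -\tfrac{1}{r}(\cos 2\pi x, \sin 2\pi x) = \tfrac{1}{r}\,\nu , \end{aligned}$$

so that \({\varvec{k}}\) has modulus \(1/r\) and points towards the center.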

Fix \(N\in {\mathbb {N}}\) and let \(i\in \{1,\ldots , N\}\), \(E^i:=[0,1]\times \{i\}\), \(E:=\bigcup _{i=1}^N E^i\) and \(V:=\bigcup _{i=1}^N \{0,1\}\times \{i\}\).

Definition 2.1

Let \(\sim \) be an equivalence relation that identifies points of V. A graph G is the topological quotient space of E induced by \(\sim \), that is

$$\begin{aligned} G:=E/\sim , \end{aligned}$$

and we assume that G is connected.

Definition 2.2

A (planar) network is a pair \({\mathcal {N}}=(G,\Gamma )\) where

$$\begin{aligned} \Gamma : G\rightarrow {\mathbb {R}}^2 \end{aligned}$$

is a continuous map and G is a graph. We say that \({\mathcal {N}}\) is of class \(W^{k,p}\) (resp. \(C^{k,\alpha }\)) if each map \(\gamma ^i:=\Gamma _{\vert E^i}\) is either a constant map (singular curve) or a regular curve of class \(W^{k,p}\) (resp. \(C^{k,\alpha }\) up to the boundary). A network is smooth if it is of class \(C^\infty \). A network is degenerate if there is at least one singular curve.

Denoting by \(\pi :E\rightarrow G\) the projection onto the quotient, an endpoint is a point \(p \in G\) such that \(\pi ^{-1}(p) \subset V\) and is a singleton, while a junction is a point \(m \in G\) such that \(\pi ^{-1}(m) \subset V\) and is not a singleton. The order of a junction m is the cardinality \(\sharp \pi ^{-1}(m)\).

We denote by \(J_G\) and \(P_G\) the sets of junctions and endpoints of a graph G, respectively. A graph G is said to be regular if each junction has order 3.

Without loss of generality, if \({\mathcal {N}}=(G,\Gamma )\) is a network and \(p \in G\) is an endpoint with \(\pi ^{-1}(p)=\{(e,i)\}\), we will implicitly assume that \(e=1\).

Definition 2.3

Let \({\mathcal {N}}=(G,\Gamma )\) be a network of class \(C^1\) and let \(e \in \{0,1\}\). The inner tangent vector of a regular curve \(\gamma ^i\) of \({\mathcal {N}}\) at e is the vector

$$\begin{aligned} (-1)^{e} \frac{(\gamma ^i)'(e)}{|(\gamma ^i)'(e)|}. \end{aligned}$$

In this paper we will be interested in the classes of triple junctions networks and regular networks.

Definition 2.4

A network \({\mathcal {N}}=(G,\Gamma )\) is a triple junctions network if G is regular, each map \(\gamma ^i:=\Gamma _{\vert E^i}\) is a regular embedding of class \(C^1\), for every \(i\ne j\) the curves \(\gamma ^i\) and \(\gamma ^j\) do not intersect in their interiors, and \(\pi (0,i)\ne \pi (1,i)\) for any i.

A network \({\mathcal {N}}=(G,\Gamma )\) is said to be regular if it is a triple junctions network such that whenever \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) is a junction then any two inner tangent vectors of \(\gamma ^i,\gamma ^j,\gamma ^k\) at \(e^i,e^j,e^k\), respectively, form an angle equal to \(\tfrac{2}{3} \pi \).

A network \({\mathcal {N}}=(G,\Gamma )\) is said to be minimal if it is regular and the curvature of the parametrization of each edge is identically zero. Moreover, we assume that the parametrizations of minimal networks have constant speed.

We shall usually denote a network by directly writing the map \(\Gamma :G\rightarrow {\mathbb {R}}^2\).

2.2 Function spaces

We introduce the space \(W^{1,2}_p\), which is a natural choice to define the motion by curvature of networks.

For \(T>0\), \(N \in {\mathbb {N}}\) with \(N\ge 1\), \(p \in (3,\infty )\), we define

$$\begin{aligned} W^{1,2}_p\left( (0,T)\times (0,1); {\mathbb {R}}^{2N} \right)&:=W^{1,p}\left( (0,T); L^p\left( (0,1);{\mathbb {R}}^{2N}\right) \right) \\&\quad \cap L^p\left( (0,T); W^{2,p}\left( (0,1);{\mathbb {R}}^{2N}\right) \right) . \end{aligned}$$

Remark 2.5

Elements in the space \(W^{1,2}_p\) are functions \(f\in L^p\left( (0,T); L^p(0,1)\right) \) possessing one distributional derivative with respect to time \(\partial _t f\in L^p\left( (0,T); L^p(0,1)\right) \). Furthermore, for almost every \(t\in (0,T)\), the function f(t) lies in \(W^{2,p}(0,1)\) and thus has two spatial derivatives \(\partial _x (f(t))\), \(\partial _x ^2\left( f(t)\right) \in L^p(0,1)\). One easily sees that the functions \(t\mapsto \partial _x^k(f(t))\) for \(k\in \{1,2\}\) lie in \(L^p\left( (0,T);L^p(0,1)\right) \).

The space \(W^{1,2}_p\) is defined as the intersection of two Bochner spaces, which are, in this case, Sobolev spaces of functions defined on a measure space with values in a Banach space. We recall that

$$\begin{aligned} \left\Vert f\right\Vert _{ L^p\left( (0,T);W^{2,p}((0,1);{\mathbb {R}}^{2N})\right) }:=\left\Vert \left\Vert f(\cdot )\right\Vert _{W^{2,p}((0,1);{\mathbb {R}}^{2N})}\right\Vert _{L^p\left( (0,T);{\mathbb {R}}\right) } \end{aligned}$$

and

$$\begin{aligned} \Vert f\Vert _{W^{1,p}\left( (0,T); L^p\left( (0,1);{\mathbb {R}}^{2N}\right) \right) }:= \left( \Vert f\Vert _{L^p((0,T);L^p((0,1);{\mathbb {R}}^{2N}))}^p+\Vert \partial _t f\Vert _{ L^p((0,T);L^p((0,1);{\mathbb {R}}^{2N}))}^p\right) ^{\nicefrac {1}{p}}. \end{aligned}$$

Let \(\Gamma _t:[0,T)\times G\rightarrow {\mathbb {R}}^2\) be a time-dependent network parametrized by \((\gamma ^1_t,\ldots ,\gamma ^N_t)\) with \(\gamma ^i_t\in W^{1,2}_p\). We shall denote \((G,\Gamma _t)\) by the symbol \((\mathcal N_t)_t\) and

$$\begin{aligned} \Vert ({\mathcal {N}}_t)_t\Vert _{W^{1,2}_p} :=\bigg ( \int _0^T \Vert \partial _t \Gamma _t\Vert _{L^p}^p \,\mathrm dt \bigg )^{\frac{1}{p}} + \bigg (\int _0^T \Vert \Gamma _t\Vert _{W^{2,p}}^p \,\mathrm dt \bigg )^{\frac{1}{p}}, \end{aligned}$$

where

$$\begin{aligned} \Vert \partial _t \Gamma _t\Vert _{L^p}^p = \sum _i \Vert \partial _t \gamma ^i_t\Vert _{L^p(\textrm{d}x)}^p, \qquad \Vert \Gamma _t\Vert _{W^{2,p}}^p = \sum _i \Vert \gamma ^i_t\Vert _{W^{2,p}(\textrm{d}x)}^p. \end{aligned}$$

We also need to introduce a suitable space for initial data. For fixed \(p\in (3,\infty )\), the Sobolev–Slobodeckij space \(W^{2-\nicefrac {2}{p},p}\left( (0,1);{\mathbb {R}}^{2N}\right) \) is defined by

$$\begin{aligned} W^{2-\nicefrac {2}{p},p}\left( (0,1);{\mathbb {R}}^{2N}\right) :=\left\{ f\in W^{1,p}\left( (0,1);{\mathbb {R}}^{2N}\right) :\left[ \partial _x f\right] _{1-\nicefrac {2}{p},p}<\infty \right\} \, \end{aligned}$$

with

$$\begin{aligned} \left[ \partial _x f\right] _{1-\nicefrac {2}{p},p} :=\left( \int _{0}^{1}\int _{0}^{1}\frac{\left|\partial _x f(x)-\partial _x f(y)\right|^p}{|x-y|^{p-1}}\,\textrm{d}x\,\textrm{d}y\right) ^{\nicefrac {1}{p}}. \end{aligned}$$
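Observe that for \(s=1-\nicefrac {2}{p}\) the exponent in the denominator equals \(1+sp=p-1\), so that \(\left[ \partial _x f\right] _{1-\nicefrac {2}{p},p}\) is the standard Gagliardo seminorm of fractional order \(1-\nicefrac {2}{p}\) of \(\partial _x f\).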

We define the \(W^{2-\nicefrac {2}{p}, p}\)-norm of a network \(\Gamma _0\) by

$$\begin{aligned} \Vert \Gamma _0\Vert _{W^{2-\nicefrac {2}{p}, p}} :=\Vert \Gamma _0\Vert _{W^{1,p}} +\left[ \partial _x\Gamma _0\right] _{1-\nicefrac {2}{p},p}, \end{aligned}$$

where

$$\begin{aligned} \Vert \Gamma _0\Vert _{W^{1,p}} = \sum _i \Vert \gamma ^i_0\Vert _{W^{1,p}}, \qquad \left[ \partial _x\Gamma _0\right] _{1-\nicefrac {2}{p},p} = \sum _i \left[ \partial _x\gamma ^i_0\right] _{1-\nicefrac {2}{p},p}. \end{aligned}$$

Remark 2.6

The temporal trace of the space \(W^{1,2}_p\) is the space \(W^{2-\nicefrac {2}{p},p}\). Since we would like to set the problem in \(W^{1,2}_p\), it is then natural to choose \(W^{2-\nicefrac {2}{p},p}\) as the space for the initial data. Moreover, we ask \(p\in (3,\infty )\) in order to make sense of the Herring condition at the junctions. Indeed, for any \(T>0\), \(p\in (3,\infty )\) and \(\alpha \in \left( 0,1-\nicefrac {3}{p}\right] \) we have the continuous embeddings

$$\begin{aligned} W_p^{1,2}\left( (0,T)\times (0,1);{\mathbb {R}}^{2N}\right)&\hookrightarrow C\left( [0,T];W^{2-\nicefrac {2}{p},p}\left( (0,1);{\mathbb {R}}^{2N}\right) \right) \\&\hookrightarrow C\left( [0,T];C^{1+\alpha }\left( [0,1];{\mathbb {R}}^{2N}\right) \right) . \end{aligned}$$

The first embedding follows from [15, Lemma 4.4], the second is an immediate consequence of the Sobolev Embedding Theorem. Again by the Sobolev Embedding Theorem for \(p\in (3,6)\) we have the compact embedding

$$\begin{aligned} H^2\left( (0,1);{\mathbb {R}}^{2N}\right) \hookrightarrow W^{2-\nicefrac {2}{p},p}\left( (0,1);{\mathbb {R}}^{2N}\right) . \end{aligned}$$

2.3 Motion by curvature of planar networks

In this section we introduce the basic definitions and known results on the motion by curvature. Let \(p\in (3,\infty )\) be fixed.

Definition 2.7

(Admissible initial datum) A network \({\mathcal {N}}_0=(G,\Gamma _0)\) is an admissible initial datum for the motion by curvature if it is a regular network of class \(W^{2-\nicefrac {2}{p},p}\).

Definition 2.8

(Solutions to the motion by curvature) Let \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) be an admissible initial datum, and denote by \(P^i=\Gamma _0(p^i)\in {\mathbb {R}}^2\) the images of the endpoints \(p^i\) of G. A one-parameter family of regular networks \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\), for \(t \in [0,T)\), is a solution to the motion by curvature with initial datum \(\Gamma _0\) if the parametrizations \(\gamma ^i_t\) of \(\Gamma _t\) satisfy

$$\begin{aligned} \begin{aligned} \gamma ^i_t(p)=P^i&\qquad \forall \, t \in [0,T), \,\,\forall \,p \text { endpoint of }G,\\ \left\langle \partial _t\gamma ^i(t,x),\nu ^i(t,x)\right\rangle \nu ^i(t,x) ={\varvec{k}}^i(t,x)&\qquad \text {for a.e. }t\in (0,T),\ x \in (0,1), \end{aligned} \end{aligned}$$
(2.1)

and the collection of parametrizations \((\gamma ^1_t,\ldots ,\gamma ^N_t)\) belongs to \(W^{1,2}_p\big ((0,T)\times (0,1); {\mathbb {R}}^{2N} \big )\), with \(\gamma ^i_t|_{t=0}=\gamma ^i_0\) for any i.

The solution is assumed to be maximal, i.e., we require that there exists no other solution defined on \([0,{\widetilde{T}})\) with \({\widetilde{T}}>T\).

Remark 2.9

We stress the fact that the evolving network must be regular for every time \(t\in [0,T)\). From the PDE point of view this means that the system (2.1) is a boundary value problem with coupled boundary conditions: whenever \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^\ell ,\ell )\) is a junction then for all \(t\in [0,T)\)

$$\begin{aligned}&\gamma ^i(t,e^i)=\gamma ^j(t,e^j)= \gamma ^\ell (t,e^\ell ),\\&(-1)^{e^i}\tau ^i(t,e^i)+(-1)^{e^j}\tau ^j(t,e^j)+(-1)^{e^\ell }\tau ^\ell (t,e^\ell )=0. \end{aligned}$$
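The second condition encodes the regularity of the network: three unit vectors in the plane sum to zero if and only if they form pairwise angles equal to \(\tfrac{2}{3}\pi \). Indeed, setting \(v^n:=(-1)^{e^n}\tau ^n(t,e^n)\) for the inner tangent vectors, from \(v^i+v^j+v^\ell =0\) one gets

$$\begin{aligned} 1 = |v^\ell |^2 = |v^i+v^j|^2 = 2 + 2\langle v^i,v^j\rangle , \qquad \text {hence}\quad \langle v^i,v^j\rangle = -\tfrac{1}{2}, \end{aligned}$$

and analogously for the other two pairs.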

We collect here a number of results on the motion by curvature of networks that are relevant for the sequel of this paper.

Theorem 2.10

(Short time existence and parabolic smoothing [26]) If \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) is an admissible initial datum, then there exists a solution \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\) to the motion by curvature starting from \(\Gamma _0\), for \(t \in [0,T)\). The solution is unique up to reparametrizations, it is smooth on \([\varepsilon ,T-\varepsilon ]\times G\) for any \(\varepsilon >0\), and \(\gamma ^i_t\rightarrow \gamma ^i_0\) in \(C^1([0,1])\) as \(t\rightarrow 0^+\) for any i.

Moreover, for any \(c_1,c_2>0\), there are \(\tau =\tau (c_1,c_2),M=M(p,c_1,c_2)>0\) such that if

$$\begin{aligned} \min _{i\in \{1,\ldots ,N\},x\in [0,1]} |\partial _x \gamma ^i_0| \ge c_1\quad \text {and}\quad \Vert \Gamma _0\Vert _{W^{2-\nicefrac {2}{p}, p}} \le c_2, \end{aligned}$$

then \(T\ge \tau \) and the solution \({\mathcal {N}}\) satisfies \(\Vert ({\mathcal {N}}_t)_t\Vert _{W^{1,2}_p} \le M\) for any \(t \in [0,\tau ]\).

To show existence of solutions one finds a unique solution to the special flow, i.e., the evolution determined by the non-degenerate parabolic second order equation

$$\begin{aligned} \partial _t \gamma ^i_t = \frac{\partial ^2_x\gamma ^i_t}{|\partial _x\gamma ^i_t|^2}. \end{aligned}$$
(2.2)

Clearly \(\left\langle \partial _t \gamma ^i_t,\nu ^i \right\rangle \nu ^i= {\varvec{k}}^i\), so a solution to the special flow is in particular a solution to the motion by curvature. Uniqueness up to reparametrizations is obtained by showing that any solution of the network flow can be obtained as a reparametrization of the solution of the special flow.
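To illustrate the analytic structure of (2.2), the following minimal numerical sketch (our own illustration, not code from the references; the discretization parameters are assumptions chosen for exposition) evolves a single curve with fixed endpoints by the special flow via explicit finite differences; a perturbed segment relaxes towards the straight segment, the minimal configuration with the same endpoints.

```python
# Minimal sketch (illustration only): explicit finite differences for the
# special flow (2.2),  d/dt gamma = gamma_xx / |gamma_x|^2,  on one curve
# with both endpoints kept fixed.
import numpy as np

def special_flow_step(gamma, dt):
    """One explicit Euler step; gamma is an (n, 2) array of points gamma(x_k)."""
    n = gamma.shape[0]
    dx = 1.0 / (n - 1)
    gamma_x = (gamma[2:] - gamma[:-2]) / (2 * dx)              # central differences
    gamma_xx = (gamma[2:] - 2 * gamma[1:-1] + gamma[:-2]) / dx**2
    speed2 = np.sum(gamma_x**2, axis=1, keepdims=True)         # |gamma_x|^2
    new = gamma.copy()
    new[1:-1] += dt * gamma_xx / speed2                        # endpoints stay fixed
    return new

x = np.linspace(0.0, 1.0, 101)
gamma = np.stack([x, 0.2 * np.sin(np.pi * x)], axis=1)  # perturbed segment
for _ in range(20000):
    gamma = special_flow_step(gamma, dt=1e-5)           # dt << dx^2 for stability
print(np.max(np.abs(gamma[:, 1])))                      # vertical deviation decays
```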

Remark 2.11

Theorem 2.10 yields existence and uniqueness of a solution in the sense of Definition 2.8 starting from any regular network of class \(W^{2-\nicefrac {2}{p},p}\). This is a great advantage in comparison with the theory first developed in [4, 43], where an initial datum \(\Gamma _0\) parametrized by \(\gamma ^i_0\) is required not only to be regular and \(C^2\), but also suitably geometrically compatible, that is, such that

$$\begin{aligned} {\varvec{k}}^i(p)=0, \qquad (-1)^{e_i}k^i(e^i)+(-1)^{e_j}k^j(e^j)+(-1)^{e_\ell }k^\ell (e^\ell )=0, \end{aligned}$$
(2.3)

at any endpoint \(p \in G\) and at any junction \(m = \pi (e^i,i)=\pi (e^j,j)=\pi (e^\ell ,\ell )\), where \(k^n\) is the oriented curvature of \(\gamma ^n\) for any n.

Remark 2.12

We stress that for any admissible initial datum in Theorem 2.10, the results in [26] imply that we can take as solution to the motion by curvature exactly the solution to the special flow. More precisely, for any positive time the parametrizations \(\gamma ^i_t\) of the solution verify Eq. (2.2), with no need to reparametrize the solution or the initial datum. We will always assume that the solution to the motion by curvature satisfies (2.2) whenever nothing different is specified.

As a consequence, the solution satisfies the analytic compatibility conditions of every order (see [43, Definition 4.7, Definition 4.16]). In particular, the following compatibility conditions of order two hold:

$$\begin{aligned} \partial ^2_x\gamma ^i(p)=0, \qquad \frac{\partial ^2_x\gamma ^i(e^i)}{|\partial _x\gamma ^i(e^i)|^2} = \frac{\partial ^2_x\gamma ^j(e^j)}{|\partial _x\gamma ^j(e^j)|^2}, \end{aligned}$$
(2.4)

at any endpoint \(p \in G\) and at any junction \(m = \pi (e^i,i)=\pi (e^j,j)=\pi (e^\ell ,\ell )\).

The fact that for any positive time we can take a smooth solution to the special flow is a key point in our analysis, as it allows us to apply the classical results and to use all the estimates derived in [40, 43, 44].

In the next statement we recall the possible singularities occurring at a singular time.

Theorem 2.13

(Long time behavior [43]) Let \(({\mathcal {N}}_t)_t\) be a solution to the motion by curvature with initial datum \(\Gamma _0\) in the time interval [0, T). Then either

$$\begin{aligned} T=+\infty , \end{aligned}$$

or as \(t\rightarrow T\) at least one of the following happens:

  (i) the limit inferior of the length of at least one curve of the network is zero;

  (ii) the limit superior of the \(L^2\)-norm of the curvature is \(+\infty \).

As mentioned in the introduction, the possibilities listed in the above theorem are not mutually exclusive.

3 Łojasiewicz–Simon inequalities

3.1 Graph parametrization of regular networks

We will employ the following notation. Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a regular network. Denote by \(\gamma ^i_*\) the parametrization of the i-th edge of \(\Gamma _*\), and by \(\tau ^i_*,\nu ^i_*\) the corresponding unit tangent and normal vectors. Whenever \(m:=\pi (e^i,i)=\pi (e^j,j)\) is a junction, denote

$$\begin{aligned} \alpha ^{ij}_m :=\langle \tau _*^i(e^i),\tau ^j_*(e^j)\rangle , \qquad \beta ^{ij}_m :=\langle \tau ^i_*(e^i),\nu ^j_*(e^j)\rangle . \end{aligned}$$
(3.1)

Observe that \(\alpha ^{ij}_m=\langle \nu ^i_*(e^i),\nu ^j_*(e^j)\rangle \), \(\alpha ^{ij}_m=\alpha ^{ji}_m\) and \(\beta ^{ij}_m=-\beta ^{ji}_m\).
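In the regular case these coefficients are explicit: since the inner tangent vectors \((-1)^{e^n}\tau ^n_*(e^n)\) at a junction form pairwise angles equal to \(\tfrac{2}{3}\pi \), one computes (the sign of \(\beta ^{ij}_m\) depending on the orientations of the edges)

$$\begin{aligned} \alpha ^{ij}_m = -\tfrac{1}{2}\,(-1)^{e^i+e^j}, \qquad \beta ^{ij}_m = \pm \tfrac{\sqrt{3}}{2}, \qquad \alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m = -\tfrac{1}{8}, \end{aligned}$$

so that the denominator \(1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m=\tfrac{9}{8}\) appearing below is bounded away from zero.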

We now derive the necessary conditions holding at the junctions for the parametrizations of a triple junctions network written as a graph over a regular one.

Lemma 3.1

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a regular network and \(\Gamma :G\rightarrow {\mathbb {R}}^2\) be a network such that at a junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) we have

$$\begin{aligned} \gamma ^\ell (e^\ell ) = \gamma ^\ell _*(e^\ell ) + {\textsf{N}}^\ell (e^\ell )\nu ^\ell _*(e^\ell ) + {\textsf{T}}^\ell (e^\ell )\tau ^\ell _*(e^\ell ),\quad \text {with}\; \ell \in \{i,j,k\} \end{aligned}$$

for some constants \({\textsf{N}}^\ell (e^\ell ),{\textsf{T}}^\ell (e^\ell ) \in {\mathbb {R}}\).

Then the following relations hold

$$\begin{aligned} \begin{aligned} {\textsf{T}}^i(e^i)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^j(e^j)\beta _m^{ij} + {\textsf{N}}^k(e^k)\alpha ^{ij}_m\beta _m^{jk} + {\textsf{N}}^i(e^i)\alpha ^{ij}_m\alpha ^{jk}_m\beta _m^{ki} \right) ,\\ {\textsf{T}}^j(e^j)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^k(e^k)\beta _m^{jk} + {\textsf{N}}^i(e^i)\alpha ^{jk}_m\beta _m^{ki} + {\textsf{N}}^j(e^j)\alpha ^{jk}_m\alpha ^{ki}_m\beta _m^{ij} \right) ,\\ {\textsf{T}}^k(e^k)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^i(e^i)\beta _m^{ki} + {\textsf{N}}^j(e^j)\alpha ^{ki}_m\beta _m^{ij} + {\textsf{N}}^k(e^k)\alpha ^{ki}_m\alpha ^{ij}_m\beta _m^{jk} \right) ,\\ \end{aligned} \end{aligned}$$
(3.2)
$$\begin{aligned} (-1)^{e^i}{\textsf{N}}^i(e^i) + (-1)^{e^j}{\textsf{N}}^j(e^j) + (-1)^{e^k}{\textsf{N}}^k(e^k) = 0. \end{aligned}$$
(3.3)

In particular

$$\begin{aligned} \begin{aligned} {\textsf{T}}^i(e^i)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^i(e^i)\left( \alpha ^{ij}_m\alpha ^{jk}_m\beta _m^{ki} +(-1)^{1+e_i+e_k}\alpha ^{ij}_m\beta _m^{jk} \right) \right. \\&\left. \quad + {\textsf{N}}^j(e^j)\left( \beta _m^{ij} +(-1)^{1+e_j+e_k}\alpha ^{ij}_m\beta _m^{jk}\right) \right) ,\\ {\textsf{T}}^j(e^j)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^i(e^i)\left( \alpha ^{jk}_m\beta _m^{ki} +(-1)^{1+e_i+e_k}\beta _m^{jk} \right) \right. \\&\left. \quad + {\textsf{N}}^j(e^j)\left( \alpha ^{jk}_m\alpha ^{ki}_m\beta _m^{ij} +(-1)^{1+e_j+e_k}\beta _m^{jk}\right) \right) ,\\ {\textsf{T}}^k(e^k)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^i(e^i) \left( \beta _m^{ki} +(-1)^{1+e_i+e_k} \alpha ^{ki}_m\alpha ^{ij}_m\beta _m^{jk} \right) \right. \\&\left. \quad + {\textsf{N}}^j(e^j)\left( \alpha ^{ki}_m\beta _m^{ij}+(-1)^{1+e_j+e_k} \alpha ^{ki}_m\alpha ^{ij}_m\beta _m^{jk}\right) \right) . \end{aligned} \end{aligned}$$
(3.4)

Proof

Let \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) be a junction of \(\Gamma \). Then \(\gamma ^i(e^i)=\gamma ^j(e^j)=\gamma ^k(e^k)\), that is

$$\begin{aligned} {\textsf{N}}^i(e^i)\nu ^i_*(e^i) + {\textsf{T}}^i(e^i)\tau ^i_*(e^i)&= {\textsf{N}}^j(e^j)\nu ^j_*(e^j) + {\textsf{T}}^j(e^j)\tau ^j_*(e^j), \end{aligned}$$
(3.5)
$$\begin{aligned} {\textsf{N}}^i(e^i)\nu ^i_*(e^i) + {\textsf{T}}^i(e^i)\tau ^i_*(e^i)&= {\textsf{N}}^k(e^k)\nu ^k_*(e^k) + {\textsf{T}}^k(e^k)\tau ^k_*(e^k). \end{aligned}$$
(3.6)

Taking the scalar product of (3.5) with \(\tau ^i_*(e^i)\), we get

$$\begin{aligned} {\textsf{T}}^i(e^i) = {\textsf{N}}^j(e^j)\beta _m^{ij} + {\textsf{T}}^j(e^j)\alpha ^{ij}_m. \end{aligned}$$
(3.7)

Analogously,

$$\begin{aligned} \begin{aligned} {\textsf{T}}^j(e^j)&= {\textsf{N}}^k(e^k)\beta _m^{jk} + {\textsf{T}}^k(e^k)\alpha ^{jk}_m, \\ {\textsf{T}}^k(e^k)&= {\textsf{N}}^i(e^i)\beta _m^{ki} + {\textsf{T}}^i(e^i)\alpha ^{ki}_m. \end{aligned} \end{aligned}$$
(3.8)

Combining (3.7) and (3.8) we obtain

$$\begin{aligned} \begin{aligned} {\textsf{T}}^i(e^i)&= {\textsf{N}}^j(e^j)\beta _m^{ij} + \alpha ^{ij}_m \left[ {\textsf{N}}^k(e^k)\beta _m^{jk} + {\textsf{T}}^k(e^k)\alpha ^{jk}_m\right] \\&= {\textsf{N}}^j(e^j)\beta _m^{ij} + \alpha ^{ij}_m \left[ {\textsf{N}}^k(e^k)\beta _m^{jk} + \alpha ^{jk}_m\left( {\textsf{N}}^i(e^i)\beta _m^{ki} + {\textsf{T}}^i(e^i)\alpha ^{ki}_m \right) \right] ,\\&= {\textsf{N}}^j(e^j)\beta _m^{ij} + {\textsf{N}}^k(e^k)\alpha ^{ij}_m\beta _m^{jk} + {\textsf{N}}^i(e^i)\alpha ^{ij}_m\alpha ^{jk}_m\beta _m^{ki} + {\textsf{T}}^i(e^i)\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m \end{aligned} \end{aligned}$$

that gives the first line in (3.2). The remaining identities in (3.2) follow analogously.

Denoting by \(p=\Gamma (m)= \gamma ^i(e^i)=\gamma ^j(e^j)=\gamma ^k(e^k)\) the image of the junction, since

$$\begin{aligned} (-1)^{e_i}\nu ^i_*(e^i) + (-1)^{e_j}\nu ^j_*(e^j) + (-1)^{e_k}\nu ^k_*(e^k) =0, \end{aligned}$$

we get

$$\begin{aligned} \begin{aligned} 0&= \langle p,(-1)^{e_i}\nu ^i_*(e^i) + (-1)^{e_j}\nu ^j_*(e^j) + (-1)^{e_k}\nu ^k_*(e^k) \rangle \\&= \langle \gamma ^i(e^i), (-1)^{e_i}\nu ^i_*(e^i)\rangle + \langle \gamma ^j(e^j), (-1)^{e_j}\nu ^j_*(e^j)\rangle + \langle \gamma ^k(e^k), (-1)^{e_k}\nu ^k_*(e^k)\rangle \\&= \langle \Gamma _*(m),(-1)^{e_i}\nu ^i_*(e^i) + (-1)^{e_j}\nu ^j_*(e^j) + (-1)^{e_k}\nu ^k_*(e^k) \rangle \\&\quad + (-1)^{e^i}{\textsf{N}}^i(e^i) + (-1)^{e^j}{\textsf{N}}^j(e^j) + (-1)^{e^k}{\textsf{N}}^k(e^k) \\&= (-1)^{e^i}{\textsf{N}}^i(e^i) + (-1)^{e^j}{\textsf{N}}^j(e^j) + (-1)^{e^k}{\textsf{N}}^k(e^k), \end{aligned} \end{aligned}$$

that is (3.3). Now plugging (3.3) into (3.2) readily implies (3.4). \(\square \)
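As a sanity check, the cyclic linear system given by (3.7) and (3.8) can be solved symbolically; the following short sketch (our own verification, assuming the sympy library; it is not part of the arguments above) confirms the first identity in (3.2).

```python
# Sanity check (illustration only): solve the cyclic system (3.7)-(3.8) for the
# tangential components and compare with the first identity in (3.2).
import sympy as sp

a_ij, a_jk, a_ki = sp.symbols('alpha_ij alpha_jk alpha_ki')  # alpha's from (3.1)
b_ij, b_jk, b_ki = sp.symbols('beta_ij beta_jk beta_ki')     # beta's from (3.1)
N_i, N_j, N_k = sp.symbols('N_i N_j N_k')                    # normal components
T_i, T_j, T_k = sp.symbols('T_i T_j T_k')                    # tangential components

sol = sp.solve(
    [sp.Eq(T_i, N_j * b_ij + T_j * a_ij),   # (3.7)
     sp.Eq(T_j, N_k * b_jk + T_k * a_jk),   # (3.8), first line
     sp.Eq(T_k, N_i * b_ki + T_i * a_ki)],  # (3.8), second line
    [T_i, T_j, T_k],
)
expected_T_i = (N_j * b_ij + N_k * a_ij * b_jk
                + N_i * a_ij * a_jk * b_ki) / (1 - a_ij * a_jk * a_ki)
print(sp.simplify(sol[T_i] - expected_T_i))  # prints 0
```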

If now \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) is regular, in the next lemma we state sufficient conditions on functions \({\textsf{N}}^\ell , {\textsf{T}}^\ell \) guaranteeing that they define a triple junctions network as a graph over \(\Gamma _*\).

Lemma 3.2

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a regular network. Then there exists \(\varepsilon _{\Gamma _*}>0\) such that for every \({\textsf{N}}^\ell ,{\textsf{T}}^\ell \in C^1([0,1])\), with \(\Vert {\textsf{N}}^\ell \Vert _{C^1}, \Vert {\textsf{T}}^\ell \Vert _{C^1} \le \varepsilon _{\Gamma _*}\) fulfilling at any junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) the identities

$$\begin{aligned} {\textsf{T}}^i(e^i)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^i(e^i)\left( \alpha ^{ij}_m\alpha ^{jk}_m\beta _m^{ki} +(-1)^{1+e_i+e_k}\alpha ^{ij}_m\beta _m^{jk} \right) \right. \\&\left. \quad + {\textsf{N}}^j(e^j)\left( \beta _m^{ij} +(-1)^{1+e_j+e_k}\alpha ^{ij}_m\beta _m^{jk}\right) \right) ,\\ {\textsf{T}}^j(e^j)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^i(e^i)\left( \alpha ^{jk}_m\beta _m^{ki} +(-1)^{1+e_i+e_k}\beta _m^{jk} \right) \right. \\&\left. \quad + {\textsf{N}}^j(e^j)\left( \alpha ^{jk}_m\alpha ^{ki}_m\beta _m^{ij} +(-1)^{1+e_j+e_k}\beta _m^{jk}\right) \right) ,\\ {\textsf{T}}^k(e^k)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( {\textsf{N}}^i(e^i) \left( \beta _m^{ki} +(-1)^{1+e_i+e_k} \alpha ^{ki}_m\alpha ^{ij}_m\beta _m^{jk} \right) \right. \\&\left. \quad + {\textsf{N}}^j(e^j)\left( \alpha ^{ki}_m\beta _m^{ij}+(-1)^{1+e_j+e_k} \alpha ^{ki}_m\alpha ^{ij}_m\beta _m^{jk}\right) \right) , \end{aligned}$$
$$\begin{aligned} (-1)^{e^i}{\textsf{N}}^i(e^i) + (-1)^{e^j}{\textsf{N}}^j(e^j) + (-1)^{e^k}{\textsf{N}}^k(e^k) = 0, \end{aligned}$$

the maps

$$\begin{aligned} \gamma ^\ell (x) :=\gamma ^\ell _*(x) + {\textsf{N}}^\ell (x)\nu ^\ell _*(x) + {\textsf{T}}^\ell (x)\tau ^\ell _*(x), \end{aligned}$$

define a triple junctions network.

Proof

It is sufficient to check that whenever \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) is a junction, then \(\gamma ^i(e^i)=\gamma ^j(e^j)=\gamma ^k(e^k)\), namely, we have to check that the three vectors \({\textsf{N}}^i(e^i)\nu ^i_*(e^i) + {\textsf{T}}^i(e^i)\tau ^i_*(e^i)\), \({\textsf{N}}^j(e^j)\nu ^j_*(e^j) + {\textsf{T}}^j(e^j)\tau ^j_*(e^j)\) and \({\textsf{N}}^k(e^k)\nu ^k_*(e^k) + {\textsf{T}}^k(e^k)\tau ^k_*(e^k)\) coincide. To this aim, observe that the identities in the assumptions imply that \({\textsf{T}}^i(e^i), {\textsf{T}}^j(e^j), {\textsf{T}}^k(e^k)\) satisfy (3.2). Taking scalar products of \({\textsf{N}}^j(e^j)\nu ^j_*(e^j) + {\textsf{T}}^j(e^j)\tau ^j_*(e^j)\) and \({\textsf{N}}^k(e^k)\nu ^k_*(e^k) + {\textsf{T}}^k(e^k)\tau ^k_*(e^k)\) with \(\tau ^i_*(e^i)\) and \(\nu ^i_*(e^i)\), exploiting (3.2) one easily checks that

$$\begin{aligned} \begin{aligned}&\langle {\textsf{N}}^j(e^j)\nu ^j_*(e^j) + {\textsf{T}}^j(e^j)\tau ^j_*(e^j), \tau ^i_*(e^i)\rangle \\&\quad = \langle {\textsf{N}}^k(e^k)\nu ^k_*(e^k) + {\textsf{T}}^k(e^k)\tau ^k_*(e^k), \tau ^i_*(e^i)\rangle = {\textsf{T}}^i(e^i),\\&\langle {\textsf{N}}^j(e^j)\nu ^j_*(e^j) + {\textsf{T}}^j(e^j)\tau ^j_*(e^j), \nu ^i_*(e^i)\rangle \\&\quad = \langle {\textsf{N}}^k(e^k)\nu ^k_*(e^k) + {\textsf{T}}^k(e^k)\tau ^k_*(e^k), \nu ^i_*(e^i)\rangle = {\textsf{N}}^i(e^i). \end{aligned} \end{aligned}$$

\(\square \)

The previous Lemmas 3.1 and 3.2 motivate the following definition.

Definition 3.3

Let G be a regular graph. At any junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), if \(i<j<k\), we denote by \(L^i,L^j,L^k\) the linear maps

$$\begin{aligned} \begin{aligned} L^i(a,b)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( \left( \alpha ^{ij}_m\alpha ^{jk}_m\beta _m^{ki} +(-1)^{1+e_i+e_k}\alpha ^{ij}_m\beta _m^{jk} \right) a \right. \\&\left. \quad + \left( \beta _m^{ij} +(-1)^{1+e_j+e_k}\alpha ^{ij}_m\beta _m^{jk}\right) b\right) ,\\ L^j(a,b)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( \left( \alpha ^{jk}_m\beta _m^{ki} +(-1)^{1+e_i+e_k}\beta _m^{jk} \right) a\right. \\&\left. \quad + \left( \alpha ^{jk}_m\alpha ^{ki}_m\beta _m^{ij} +(-1)^{1+e_j+e_k}\beta _m^{jk}\right) b\right) , \\ L^k(a,b)&= \frac{1}{1-\alpha ^{ij}_m\alpha ^{jk}_m\alpha ^{ki}_m} \left( \left( \beta _m^{ki} +(-1)^{1+e_i+e_k} \alpha ^{ki}_m\alpha ^{ij}_m\beta _m^{jk} \right) a\right. \\&\left. \quad + \left( \alpha ^{ki}_m\beta _m^{ij}+(-1)^{1+e_j+e_k} \alpha ^{ki}_m\alpha ^{ij}_m\beta _m^{jk}\right) b \right) , \end{aligned} \end{aligned}$$

for any \((a,b)\in {\mathbb {R}}^2\). Moreover, we denote by \(I_m\) the set of indices \(\ell \) such that \(E^\ell \) has an endpoint at m, and we denote by \(e^\ell _m\in \{0,1\}\) the endpoint of \(E^\ell \) at m, for \(\ell \in I_m\).

Furthermore, for \(\ell \in I_m\), we denote by \({\mathscr {L}}^\ell _m\) the linear operator \(L^i\), \(L^j\), or \(L^k\), depending on whether \(\ell \) is the minimal, intermediate, or maximal index in \(I_m\).

Finally, for any endpoint p, we denote by \(i_p\) the corresponding index such that p is an endpoint of \(E^{i_p}\).

We are now ready to prove the existence of a canonical graph parametrization, over a regular network, of the triple junctions networks that are close to it in the \(H^2\)-norm. As discussed in the introduction, we shall perform the construction by fixing a dependence of the tangential component of the graph on the normal one. Such dependence is naturally defined by suitably extending to the interior of the edges the relations that we found to hold at the junctions in Lemmas 3.1 and 3.2.

For this purpose, from now on and for the rest of the paper, we fix a nonincreasing smooth cut-off function

$$\begin{aligned} \chi :[0,1/2]\rightarrow [0,1], \quad \chi |_{[0,\frac{1}{8}]}\equiv 1, \quad \chi |_{[\frac{3}{8},\frac{1}{2}]}\equiv 0. \end{aligned}$$

Proposition 3.4

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network. Then there exists \(\varepsilon _{\Gamma _*}>0\) such that whenever \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a triple junctions network of class \(H^2\) such that

$$\begin{aligned} \sum _i \Vert \gamma ^i_* - \gamma ^i \Vert _{H^2(\textrm{d}x)} \le \varepsilon _{\Gamma _*},&\end{aligned}$$
(3.9)
$$\begin{aligned} \gamma ^i_*(p)=\gamma ^i(p)&\qquad \forall \,p\in G\ :\ p\text { is an endpoint,} \end{aligned}$$
(3.10)

for any i, then there exist functions \({\textsf{N}}^i, {\textsf{T}}^i \in H^2(\textrm{d}x)\) and reparametrizations \(\varphi ^i:[0,1]\rightarrow [0,1]\) of class \(H^2(\textrm{d}x)\) such that

$$\begin{aligned} \gamma ^i\circ \varphi ^i(x) = \gamma ^i_*(x) + {\textsf{N}}^i(x)\nu ^i_*(x) + {\textsf{T}}^i(x)\tau ^i_*(x). \end{aligned}$$

At any junction \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), where \(i<j<k\), there holds

$$\begin{aligned} \begin{aligned} {\textsf{T}}^i(|e^i-x|)&= \chi (x) L^i({\textsf{N}}^i (|e^i-x|), {\textsf{N}}^j(|e^j-x|) ) , \\ {\textsf{T}}^j(|e^j-x|)&= \chi (x) L^j({\textsf{N}}^i(|e^i-x|) , {\textsf{N}}^j(|e^j-x|)), \\ {\textsf{T}}^k(|e^k-x|)&= \chi (x) L^k({\textsf{N}}^i(|e^i-x|), {\textsf{N}}^j(|e^j-x|) ) , \end{aligned} \end{aligned}$$
(3.11)

for \(x \in [0,\tfrac{1}{2}]\).

If \(\pi (1,i)\) is an endpoint, then

$$\begin{aligned} {\textsf{T}}^i(x)=0, \end{aligned}$$
(3.12)

for \(x \in [\tfrac{1}{2},1]\).

Moreover

  • for any \(\delta >0\) there is \(\varepsilon \in (0,\varepsilon _{\Gamma _*})\) such that

    $$\begin{aligned} \sum _i \Vert \gamma ^i_* - \gamma ^i \Vert _{H^2(\textrm{d}x)} \le \varepsilon \quad \implies \quad \sum _i \Vert {\textsf{N}}^i\Vert _{H^2(\textrm{d}x)}+ \Vert \varphi ^i(x)-x\Vert _{H^2(\textrm{d}x)} \le \delta ; \end{aligned}$$
    (3.13)
  • for any \(\eta >0\) and \(m\in {\mathbb {N}}\) there is \(\varepsilon _{\eta ,m}\in (0,\varepsilon _{\Gamma _*})\) such that if \(\sum _i \Vert \gamma ^i_* - \gamma ^i \Vert _{C^{m+1}([0,1])} \le \varepsilon _{\eta ,m}\), then

    $$\begin{aligned} \sum _i \Vert {\textsf{N}}^i\Vert _{H^m(\textrm{d}x)} \le \eta . \end{aligned}$$
    (3.14)

Proof

Without loss of generality, we can perform the construction at a junction \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), where \(i<j<k\), assuming \(e^i=e^j=e^k=0\). For the sake of clarity, we show how to construct \(\varphi ^i,{\textsf{N}}^i\) and \(\varphi ^j,{\textsf{N}}^j\) on \([0,\frac{1}{2}]\) only, the complete proof being a straightforward adaptation.

Consider the function \(F:[0,\frac{1}{2}]\times {\mathbb {R}}^2\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}^4\) given by

$$\begin{aligned} F(x,n^i,y^i,n^j,y^j):=\begin{pmatrix} \gamma ^i_*(x) + n^i\nu ^i_*(x) + \chi (x) L^i(n^i,n^j) \tau ^i_*(x) - \gamma ^i(y^i) \\ \gamma ^j_*(x) + n^j\nu ^j_*(x) + \chi (x) L^j(n^i,n^j) \tau ^j_*(x) - \gamma ^j(y^j) \end{pmatrix}. \end{aligned}$$

Reflecting by symmetry, we can assume that F is also defined in an open neighborhood of \(x=0\). Since \(\{\tau ^i_*(0),\nu ^i_*(0)\}\) (and \(\{\tau ^j_*(0),\nu ^j_*(0)\}\)) is a basis of \({\mathbb {R}}^2\), there exist unique numbers \({\textsf{N}}^i(0),{\textsf{T}}^i(0)\) (and \({\textsf{N}}^j(0),{\textsf{T}}^j(0)\)) such that \(\gamma ^i(0) = \gamma ^i_*(0) + {\textsf{N}}^i(0)\nu ^i_*(0) + {\textsf{T}}^i(0)\tau ^i_*(0)\) (and \(\gamma ^j(0) = \gamma ^j_*(0) + {\textsf{N}}^j(0)\nu ^j_*(0) + {\textsf{T}}^j(0)\tau ^j_*(0)\)). Since \(\Gamma \) is a triple junctions network, by Lemma 3.1 we have that \(F(0,{\textsf{N}}^i(0),0,{\textsf{N}}^j(0),0)=0\). Moreover, the matrix

$$\begin{aligned} \begin{aligned}&M(x,n^i,y^i,n^j,y^j) \\&\quad :=\left( \begin{array}{c|c|c|c} \partial _{n^i} F&\partial _{y^i} F&\partial _{n^j} F&\partial _{y^j} F \end{array}\right) \big |_{(x,n^i,y^i,n^j,y^j)} \\&\quad = \left( \begin{array}{c|c|c|c} \nu ^i_*(x) + \chi (x) L^i(1,0)\tau ^i_*(x) &{} -(\gamma ^i)'(y^i)&{} \chi (x)L^i(0,1)\tau ^i_*(x) &{} 0 \\ \chi (x)L^j(1,0)\tau ^j_*(x) &{} 0 &{} \nu ^j_*(x) + \chi (x) L^j(0,1)\tau ^j_*(x) &{} -(\gamma ^j)'(y^j) \end{array}\right) \end{aligned} \end{aligned}$$

satisfies

$$\begin{aligned} M(0,{\textsf{N}}^i(0),0,{\textsf{N}}^j(0),0) = \left( \begin{array}{c|c|c|c} \nu ^i_*(0) + L^i(1,0)\tau ^i_*(0) &{} -(\gamma ^i)'(0)&{} L^i(0,1)\tau ^i_*(0) &{} 0 \\ L^j(1,0)\tau ^j_*(0) &{} 0 &{} \nu ^j_*(0) + L^j(0,1)\tau ^j_*(0) &{} -(\gamma ^j)'(0) \end{array}\right) . \end{aligned}$$

It is readily checked that this matrix is invertible. Hence we can apply the implicit function theorem to get the existence of \(\varphi ^i,{\textsf{N}}^i\) and \(\varphi ^j,{\textsf{N}}^j\) defined on some interval \([0,\xi ]\subset [0,1/2]\) such that

$$\begin{aligned} \begin{aligned} \gamma ^i(\varphi ^i(x))&= \gamma ^i_*(x) + {\textsf{N}}^i(x)\nu ^i_*(x) + \chi (x) L^i({\textsf{N}}^i(x),{\textsf{N}}^j(x)) \tau ^i_*(x),\\ \gamma ^j(\varphi ^j(x))&= \gamma ^j_*(x) + {\textsf{N}}^j(x)\nu ^j_*(x) + \chi (x) L^j({\textsf{N}}^i(x),{\textsf{N}}^j(x)) \tau ^j_*(x). \end{aligned} \end{aligned}$$

From the identity

$$\begin{aligned} \begin{aligned} \begin{pmatrix} \partial _x {\textsf{N}}^i(x) \\ \partial _x \varphi ^i(x) \\ \partial _x {\textsf{N}}^j(x) \\ \partial _x \varphi ^j(x) \\ \end{pmatrix}&= -\left[ M\left( x, {\textsf{N}}^i(x),\varphi ^i(x), {\textsf{N}}^j(x), \varphi ^j(x)\right) \right] ^{-1} \cdot \partial _x F\big |_{(x, {\textsf{N}}^i(x),\varphi ^i(x), {\textsf{N}}^j(x), \varphi ^j(x))} \\&= -\left[ M\left( x, {\textsf{N}}^i(x),\varphi ^i(x), {\textsf{N}}^j(x), \varphi ^j(x)\right) \right] ^{-1}\\&\quad \cdot \begin{pmatrix} (\gamma ^i_*)' + {\textsf{N}}^i \partial _x \nu ^i_* + [\chi ' \tau ^i_* + \chi \partial _x \tau ^i_*] L^i({\textsf{N}}^i, {\textsf{N}}^j) \\ (\gamma ^j_*)' + {\textsf{N}}^j \partial _x \nu ^j_* + [\chi ' \tau ^j_* + \chi \partial _x \tau ^j_*] L^j({\textsf{N}}^i, {\textsf{N}}^j) \end{pmatrix} \end{aligned} \end{aligned}$$
(3.15)

we estimate

$$\begin{aligned} \begin{aligned}&\Vert \partial _x {\textsf{N}}^i(x) \Vert _{L^\infty (0,\xi )} + \Vert \partial _x \varphi ^i(x) \Vert _{L^\infty (0,\xi )} + \Vert \partial _x {\textsf{N}}^j(x) \Vert _{L^\infty (0,\xi )} \\&\qquad + \Vert \partial _x \varphi ^j(x)\Vert _{L^\infty (0,\xi )} \\&\quad \le C(\Gamma _*, \Vert \partial _x \gamma ^i\Vert _\infty , \Vert \partial _x \gamma ^j\Vert _\infty ) \left( 1 + \Vert {\textsf{N}}^i(x) \Vert _{L^\infty (0,\xi )} + \Vert {\textsf{N}}^j(x) \Vert _{L^\infty (0,\xi )}\right) \\&\quad {\mathop {\le }\limits ^{(3.9)}} C(\Gamma _*, \varepsilon _{\Gamma _*}) \left( 1 + \Vert {\textsf{N}}^i(x) \Vert _{L^\infty (0,\xi )} + \Vert {\textsf{N}}^j(x) \Vert _{L^\infty (0,\xi )}\right) . \end{aligned} \end{aligned}$$

Since \({\textsf{N}}^i = \langle \gamma ^i\circ \varphi ^i -\gamma ^i_*, \nu ^i_*\rangle \), and analogously for j, recalling (3.9) we get

$$\begin{aligned} \begin{aligned}&\Vert \partial _x {\textsf{N}}^i(x) \Vert _{L^\infty (0,\xi )} + \Vert \partial _x \varphi ^i(x) \Vert _{L^\infty (0,\xi )} + \Vert \partial _x {\textsf{N}}^j(x) \Vert _{L^\infty (0,\xi )} + \Vert \partial _x \varphi ^j(x)\Vert _{L^\infty (0,\xi )} \\&\quad \le C(\Gamma _*, \varepsilon _{\Gamma _*}). \end{aligned} \end{aligned}$$
(3.16)

We claim that

$$\begin{aligned} \begin{aligned}&\forall \,\delta>0 \,\exists \,\varepsilon \in (0,\varepsilon _{\Gamma _*}) \ :\ \sum _i\Vert \gamma ^i_* - \gamma ^i \Vert _{H^2(\textrm{d}x)} \le \varepsilon \\&\quad \implies \quad \exists \, {\overline{\xi }}(\varepsilon )>0\,\text { such that }\xi \ge {\overline{\xi }},\quad \Vert \varphi ^i(x)-x\Vert _{W^{1,\infty }(0,{{\overline{\xi }}})} \le \delta . \end{aligned} \end{aligned}$$
(3.17)

Indeed, suppose by contradiction that there is \(\delta >0\) and a sequence of triple junctions networks \(\Gamma _n\) such that \(\sum _i\Vert \gamma ^i_* - \gamma ^i_n \Vert _{H^2(\textrm{d}x)} \le 1/n \), but the implicit functions \(\varphi ^i_n:[0,\xi _n]\rightarrow [0,1/2]\) obtained as above do not verify (3.17). Denote by \(F_n,M_n, {\textsf{N}}^i_n,{\textsf{N}}^j_n\) the map, the matrices, and the functions defined by the above procedure applied on the network \(\Gamma _n\) in place of \(\Gamma \). Since \(\sum _i\Vert \gamma ^i_* - \gamma ^i_n \Vert _{H^2(\textrm{d}x)} \le 1/n \), there are \(S,\rho >0\) independent of n such that

$$\begin{aligned}&\Vert [M_n(0,{\textsf{N}}_n^i(0),0,{\textsf{N}}_n^j(0),0)]^{-1}\Vert \le S, \\&\left\| \textrm{id} - [M_n(0,{\textsf{N}}_n^i(0),0,{\textsf{N}}_n^j(0),0)]^{-1} M_n(x,n^i,y^i,n^j,y^j) \right\| \le \frac{1}{2}, \end{aligned}$$

whenever \(|x|<\rho \) and \(|(n^i,y^i,n^j,y^j) - ({\textsf{N}}_n^i(0),0,{\textsf{N}}_n^j(0),0)|<\rho \). Furthermore, since \(\partial _x F_n (x,n^i,y^i,n^j,y^j)\) does not depend on n, there is \(N>0\) such that

$$\begin{aligned} \Vert \partial _x F_n (x,n^i,y^i,n^j,y^j)\Vert \le N, \end{aligned}$$

whenever \(|x|<\rho \) and \(|(n^i,y^i,n^j,y^j) - ({\textsf{N}}_n^i(0),0,{\textsf{N}}_n^j(0),0)|<\rho \), for any n. Hence the assumptions of Theorem A.1 are satisfied, and thus there is \({{\overline{\xi }}}>0\) such that \(\xi _n\ge {{\overline{\xi }}}\) for any n. Then it must be that \(\Vert \varphi ^i_n(x)-x\Vert _{W^{1,\infty }(0,{{\overline{\xi }}})} > \delta \) for any n.

Up to a subsequence, recalling the uniform bounds (3.16), we can pass to the limit \(n\rightarrow \infty \) in the identity

$$\begin{aligned} \gamma ^i_n(\varphi ^i_n(x)) = \gamma ^i_*(x) + {\textsf{N}}^i_n(x)\nu ^i_*(x) + \chi (x) L^i({\textsf{N}}^i_n(x),{\textsf{N}}^j_n(x)) \tau ^i_*(x), \end{aligned}$$

to obtain

$$\begin{aligned} \gamma ^i_*(\varphi ^i_\infty (x)) = \gamma ^i_*(x) + {\textsf{N}}^i_\infty (x)\nu ^i_*(x) + \chi (x) L^i({\textsf{N}}^i_\infty (x),{\textsf{N}}^j_\infty (x)) \tau ^i_*(x), \end{aligned}$$
(3.18)

where \(\varphi ^i_n\rightarrow \varphi ^i_\infty \), \({\textsf{N}}^i_n\rightarrow {\textsf{N}}^i_\infty \), and \({\textsf{N}}^j_n\rightarrow {\textsf{N}}^j_\infty \) in \(C^0([0,{{\overline{\xi }}}])\) and in \(H^1(0,{{\overline{\xi }}})\), and (3.18) holds pointwise on \([0,{{\overline{\xi }}}]\). By the uniqueness part of the implicit function theorem, we deduce that \(\varphi ^i_\infty (x)\equiv x\), \({\textsf{N}}^i_\infty (x)\equiv 0\), and \({\textsf{N}}^j_\infty (x)\equiv 0\).

Moreover, by (3.15), uniform convergence on the right hand side implies that \(\varphi ^i_n(x)\rightarrow x\), \({\textsf{N}}^i_n\rightarrow 0\), and \({\textsf{N}}^j_n\rightarrow 0\) in \(C^1([0,{{\overline{\xi }}}])\).

Hence \(0=\Vert \varphi ^i_\infty (x)-x\Vert _{W^{1,\infty }(0,{{\overline{\xi }}})} =\lim _n\Vert \varphi ^i_n(x)-x\Vert _{W^{1,\infty }(0,{{\overline{\xi }}})} \ge \delta \) gives a contradiction, and (3.17) follows.

By (3.17), up to decreasing \(\varepsilon _{\Gamma _*}\), we have \((\varphi ^i)'\ge \tfrac{1}{2}\) on \((0,{{\overline{\xi }}})\) for any i. Hence, further differentiating (3.15) and arguing as before, we also derive that

$$\begin{aligned}&\forall \,\delta>0 \,\exists \,\varepsilon \in (0,\varepsilon _{\Gamma _*}) \ :\ \sum _i \Vert \gamma ^i_* - \gamma ^i \Vert _{H^2(\textrm{d}x)} \le \varepsilon \nonumber \\&\quad \implies \exists \, {\overline{\xi }}(\varepsilon )>0\,\text { such that }\,\, \xi \ge {\overline{\xi }}, \quad \Vert {\textsf{N}}^i\Vert _{H^2(0,{{\overline{\xi }}})}+ \Vert \varphi ^i(x)-x\Vert _{H^2(0,{{\overline{\xi }}})} \le \delta . \end{aligned}$$
(3.19)

Since \({\overline{\xi }}\) only depends on \(\varepsilon _{\Gamma _*}\), we can iterate the above argument finitely many times to get the complete construction on the interval \([0,\frac{1}{2}]\), as claimed.

Moreover, (3.19) eventually implies (3.13). Similarly, further differentiating (3.15) leads to (3.14). \(\square \)

Arguing as in Proposition 3.4, one obtains the following analogous consequence for the parametrization of a time-dependent family of networks in a neighborhood of a fixed one \(\Gamma _*\).

Corollary 3.5

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network. Then there exists \(\varepsilon _{\Gamma _*}>0\) such that whenever \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\) is a one-parameter family of triple junctions networks of class \(H^2\), differentiable with respect to t for \(t \in [t_0-h,t_0+h]\) for some \(h>0\), such that \((t,x)\mapsto \partial _t \gamma ^i_t(x)\) is continuous for any i and

$$\begin{aligned}&\sum _i \Vert \gamma ^i_* - \gamma ^i_t \Vert _{H^2(\textrm{d}x)} \le \varepsilon _{\Gamma _*}, \\&\gamma ^i_*(p)=\gamma ^i_t(p) \qquad \forall \,p\in G\ :\ p\text { is an endpoint,} \end{aligned}$$

for any i and any t, then there exist \(h'\in (0,h)\) and functions \({\textsf{N}}^i_t, {\textsf{T}}^i_t \in H^2(\textrm{d}x)\) and reparametrizations \(\varphi ^i_t:[0,1]\rightarrow [0,1]\) of class \(H^2(\textrm{d}x)\), continuously differentiable with respect to t for \(t \in [t_0-h',t_0+h']\), such that

$$\begin{aligned} \gamma ^i_t\circ \varphi ^i_t(x) = \gamma ^i_*(x) + {\textsf{N}}^i_t(x)\nu ^i_*(x) + {\textsf{T}}^i_t(x)\tau ^i_*(x). \end{aligned}$$

At any junction \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), where \(i<j<k\), there holds

$$\begin{aligned} \begin{aligned} {\textsf{T}}^i_t(|e^i-x|)&= \chi (x) L^i({\textsf{N}}^i_t (|e^i-x|), {\textsf{N}}^j_t(|e^j-x|) ) , \\ {\textsf{T}}^j_t(|e^j-x|)&= \chi (x) L^j({\textsf{N}}^i_t(|e^i-x|) , {\textsf{N}}^j_t(|e^j-x|)), \\ {\textsf{T}}^k_t(|e^k-x|)&= \chi (x) L^k({\textsf{N}}^i_t(|e^i-x|), {\textsf{N}}^j_t(|e^j-x|) ) , \end{aligned} \end{aligned}$$

for \(x \in [0,\tfrac{1}{2}]\).

If \(\pi (1,i)\) is an endpoint, then

$$\begin{aligned} {\textsf{T}}^i_t(x)=0, \end{aligned}$$

for \(x \in [\tfrac{1}{2},1]\).

Moreover

  • for any \(\delta >0\) there is \(\varepsilon \in (0,\varepsilon _{\Gamma _*})\) such that

    $$\begin{aligned} \sum _i \Vert \gamma ^i_* - \gamma ^i_t \Vert _{H^2(\textrm{d}x)} \le \varepsilon \quad \forall \,t \quad \implies \quad \sum _i \Vert {\textsf{N}}^i_t\Vert _{H^2(\textrm{d}x)}+ \Vert \varphi ^i_t(x)-x\Vert _{H^2(\textrm{d}x)} \le \delta , \end{aligned}$$
    (3.20)

    for any \(t \in [t_0-h',t_0+h']\);

  • for any \(\eta >0\) and \(m\in {\mathbb {N}}\) there are \(\varepsilon _{\eta ,m}\in (0,\varepsilon _{\Gamma _*})\) and \(h_{\eta ,m} \in (0,h)\) such that if \(\sum _i \Vert \gamma ^i_* - \gamma ^i_t \Vert _{C^{m+1}([0,1])} \le \varepsilon _{\eta ,m}\) for any t, then

    $$\begin{aligned} \sum _i \Vert {\textsf{N}}^i_t\Vert _{H^m(\textrm{d}x)} \le \eta , \end{aligned}$$
    (3.21)

    for any \(t \in [t_0-h_{\eta ,m}, t_0 + h_{\eta ,m}]\).

The construction of the “tangent functions” \({\textsf{T}}^i\)’s in Proposition 3.4 and Corollary 3.5 depending on the “normal functions” \({\textsf{N}}^i\)’s motivates the next definition.

Definition 3.6

(Adapted tangent functions) Let G be a regular graph. Let \({\textsf{N}}^i,{\textsf{T}}^i:[0,1]\rightarrow {\mathbb {R}}\) be functions of class \(C^1\), for \(i=1,\ldots ,N\). We say that the \({\textsf{T}}^i\)’s are adapted to the \({\textsf{N}}^i\)’s whenever there hold the relations (3.11) and (3.12).

More explicitly, the \({\textsf{T}}^i\)’s are adapted to the \({\textsf{N}}^i\)’s whenever

$$\begin{aligned} {\textsf{T}}^\ell (|e^\ell _m - x|) = \chi (x) {\mathscr {L}}^\ell _m({\textsf{N}}^i(|e^i_m - x|), {\textsf{N}}^j(|e^j_m-x|) ), \end{aligned}$$
(3.22)

for \(x \in [0,\tfrac{1}{2}]\) for any junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) with \(i<j<k\), and

$$\begin{aligned} {\textsf{T}}^i(x)=0 \end{aligned}$$

for \(x \in [\tfrac{1}{2},1]\) for any endpoint \(\pi (1,i)\).

3.2 First and second variations

In order to derive the desired Łojasiewicz–Simon inequality, we need to compute first and second variations of the length functional taking variations determined by graph parametrizations over regular networks with tangent functions adapted to normal functions as in Definition 3.6.

Proposition 3.7

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network. Then there is \(\varepsilon _{\Gamma _*}>0\) such that the following holds.

Let \({\textsf{N}}^i, X^i \in H^2\) with \(\Vert {\textsf{N}}^i\Vert _{H^2}\le \varepsilon _{\Gamma _*}\) be such that

$$\begin{aligned} \sum _{\ell \in I_m} (-1)^{e^\ell _m} {\textsf{N}}^\ell (e^\ell _m) = \sum _{\ell \in I_m} (-1)^{e^\ell _m} X^\ell (e^\ell _m)=0 \qquad \forall m \in J_G. \end{aligned}$$

Let \(\Gamma ^\varepsilon :G\rightarrow {\mathbb {R}}^2\) be the triple junctions network defined by

$$\begin{aligned} \gamma ^{i,\varepsilon }(x) :=\gamma ^i_*(x) + ({\textsf{N}}^i(x)+ \varepsilon X^i(x))\nu ^i_*(x) + {\textsf{T}}^{i,\varepsilon }(x)\tau ^i_*(x), \end{aligned}$$
(3.23)

for any i, for any \(|\varepsilon |<\varepsilon _0\) and some \(\varepsilon _0>0\), where the \({\textsf{T}}^{i,\varepsilon }\)’s are adapted to the \(({\textsf{N}}^i+ \varepsilon X^i)\)’s.

Call \(\Gamma \) the network given by the immersions \(\gamma ^i:=\gamma ^{i,0}\). Then

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } {\textrm{L}} (\Gamma ^\varepsilon ) \bigg |_0&= \sum _{p \in P_G} \langle \tau ^{i_p}(1), \nu ^{i_p}_*(1)\rangle X^{i_p}(1) \\&\quad + \sum _{m \in J_G} \sum _{\ell \in I_m} (-1)^{1+e^\ell _m} \left[ \langle \tau ^\ell (e^\ell _m), \nu ^\ell _*(e^\ell _m)\rangle + \sum _{j \in I_m} h_{\ell j}\langle \tau ^j(e^j_m) , \tau ^j_*(e^j_m)\rangle \right] X^\ell (e^\ell _m) \\&\quad - \sum _i \int _0^1 \bigg ( \langle {\varvec{k}}^i,\nu ^i_*\rangle |\partial _x \gamma ^i| + \sum _j f_{ij}\chi \langle {\varvec{k}}^j,\tau ^j_*\rangle |\partial _x \gamma ^j| \\&\quad \qquad + g_{ij}\chi (1-x) \langle {\varvec{k}}^j,\tau ^j_*\rangle (1-x)|\partial _x \gamma ^j|(1-x) \bigg ) X^i \,\mathrm dx, \end{aligned}$$
(3.24)

where \(f_{ij}, g_{ij}, h_{\ell j} \in {\mathbb {R}}\) depend on the topology of G.

If also \(\Gamma \) is regular and \(\gamma ^{i,\varepsilon }(p)=\gamma ^i_*(p) \) for any i at any endpoint p, then

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } {\textrm{L}} (\Gamma ^\varepsilon ) \bigg |_0&= - \sum _i \int _0^1 \bigg ( \langle {\varvec{k}}^i,\nu ^i_*\rangle |\partial _x \gamma ^i| + \sum _j f_{ij}\chi \langle {\varvec{k}}^j,\tau ^j_*\rangle |\partial _x \gamma ^j| \\&\quad + g_{ij}\chi (1-x) \langle {\varvec{k}}^j,\tau ^j_*\rangle (1-x)|\partial _x \gamma ^j|(1-x) \bigg ) X^i \,\mathrm dx. \end{aligned}$$
(3.25)

Proof

Let us assume first that there is a junction m such that the functions \(X^\ell \) appearing in (3.23) all vanish except for \(\ell \in I_m\). Moreover, for \(\ell \in I_m\), assume that \(e^\ell _m=0\) and that \(X^\ell \) has compact support in \([0,\tfrac{5}{8})\).

Let us denote \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), where \(i<j<k\). By differentiating the length functional we get

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } {\textrm{L}} (\Gamma ^\varepsilon ) = \sum _{\ell \in I_m}\int _0^1 \frac{1}{|\partial _x \gamma ^{\ell ,\varepsilon }|} \langle \partial _x \gamma ^{\ell ,\varepsilon }, \partial _x \partial _\varepsilon \gamma ^{\ell ,\varepsilon }\rangle \,\mathrm dx, \end{aligned} \end{aligned}$$
(3.26)

indeed, since \(\textrm{spt}\,X^\ell \subset [0,\tfrac{5}{8})\), by (3.22) and definition of \(\chi \), we have that \({\textsf{T}}^{n,\varepsilon }\) does not depend on \(\varepsilon \) for all \(n \not \in I_m\). Moreover

$$\begin{aligned} \begin{aligned} {\textsf{T}}^{\ell ,\varepsilon }(x)&= \chi (x) {\mathscr {L}}^\ell _m(({\textsf{N}}^i+ \varepsilon X^i)(x), ({\textsf{N}}^j+\varepsilon X^j)(x) ) \\&= \chi (x) {\mathscr {L}}^\ell _m({\textsf{N}}^i(x), {\textsf{N}}^j(x) ) + \varepsilon \, \chi (x){\mathscr {L}}^\ell _m(X^i(x), X^j(x) ) \end{aligned} \end{aligned}$$
(3.27)

for \(\ell \in I_m\), hence, letting

$$\begin{aligned} Y^\ell :=\partial _\varepsilon \gamma ^{\ell ,\varepsilon } = X^\ell \nu ^\ell _* + \chi {\mathscr {L}}^\ell _m(X^i,X^j)\tau ^\ell _*, \end{aligned}$$
(3.28)

we find

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } {\textrm{L}} (\Gamma ^\varepsilon ) \bigg |_0&= \sum _{\ell \in I_m} \int _0^1 \frac{1}{|\partial _x \gamma ^\ell |} \langle \partial _x \gamma ^\ell , \partial _x Y^\ell \rangle \,\mathrm dx = \sum _{\ell \in I_m} \int _0^1 \langle \tau ^\ell , \partial _x Y^\ell \rangle \,\mathrm dx \\&=\sum _{\ell \in I_m} \Big ( - \langle \tau ^\ell (0),Y^\ell (0)\rangle - \int _0^1 \langle {\varvec{k}}^\ell , Y^\ell \rangle \,\mathrm ds^\ell \Big ). \end{aligned}$$

Since \(\gamma ^{\ell ,\varepsilon }(0)=\gamma ^{l,\varepsilon }(0)\) for any \(\varepsilon \) and \(\ell ,l \in I_m\), then \(Y^\ell (0)=Y^l(0)\) for any \(\ell ,l \in I_m\). Hence, if \(\Gamma \) is regular, then the boundary term \(\sum _{\ell \in I_m} \langle \tau ^\ell (0),Y^\ell (0)\rangle =0\).
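Indeed, at a regular junction the three unit tangent vectors, here all oriented away from the junction since \(e^\ell _m=0\) for \(\ell \in I_m\), form angles equal to \(\tfrac{2}{3}\pi \) and hence sum to zero, so that

$$\begin{aligned} \sum _{\ell \in I_m} \langle \tau ^\ell (0),Y^\ell (0)\rangle = \Big \langle \sum _{\ell \in I_m} \tau ^\ell (0), Y(0) \Big \rangle = 0, \end{aligned}$$

where \(Y(0)\) denotes the common value of the \(Y^\ell (0)\)’s.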

Employing (3.28) we get

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } {\textrm{L}} (\Gamma ^\varepsilon ) \bigg |_0&= - \sum _{\ell \in I_m} \Big ( \langle \tau ^\ell (0),\nu ^\ell _*(0)\rangle X^\ell (0) + {\mathscr {L}}^\ell _m(X^i(0),X^j(0)) \langle \tau ^\ell (0), \tau ^\ell _*(0)\rangle \Big ) \\&\quad - \sum _{\ell \in I_m} \int _0^1 \langle {\varvec{k}}^\ell , \nu ^\ell _*\rangle X^\ell + \langle {\varvec{k}}^\ell , \tau ^\ell _* \rangle \chi {\mathscr {L}}^\ell _m(X^i,X^j) \,\mathrm ds^\ell . \end{aligned}$$

Suppose now that there is an endpoint \(p \in P_G\) such that the functions \(X^\ell \) appearing in (3.23) all vanish except for \(\ell = i_p\). Moreover, assume that \(X^{i_p}\) has compact support in \((\tfrac{3}{8},1]\). Hence \(Y^{i_p}:=\partial _\varepsilon \gamma ^{i_p,\varepsilon } = X^{i_p}\nu ^{i_p}_*\) in this case, and the same computation performed above now yields

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } {\textrm{L}}(\Gamma ^\varepsilon )\bigg |_0 = \int _0^1 \langle \tau ^{i_p}, \partial _x Y^{i_p}\rangle \,\mathrm dx = \langle \tau ^{i_p}(1),Y^{i_p}(1)\rangle - \int _0^1 \langle {\varvec{k}}^{i_p}, Y^{i_p}\rangle \,\mathrm ds^{i_p}, \end{aligned} \end{aligned}$$

which takes the form given in (3.24). If \(\gamma ^{i,\varepsilon }(p)=\gamma ^i_*(p)\) for any i at any endpoint p, then \({\textsf{N}}^{i_p}(1)=X^{i_p}(1)=0\), and (3.25) follows as well.

Considering now arbitrary variations as in (3.23), (3.24) follows in the general case by observing that the formula is linear with respect to the \(X^i\)’s and that each \(X^i\) can be written as \(X^i= \eta X^i + (1-\eta ) X^i\) in such a way that \(\textrm{spt} (\eta X^i) \subset [0,\tfrac{5}{8})\) and \(\textrm{spt} ((1-\eta ) X^i )\subset (\tfrac{3}{8},1]\), recalling also that \(\partial _\varepsilon \gamma ^{i,\varepsilon }(p)=0\) at any endpoint p. Additive terms of the form \(g_{ij}\chi (1-x) \langle {\varvec{k}}^j,\tau ^j_*\rangle (1-x)|\partial _x \gamma ^j|(1-x) X^i (x)\) appear by changing variables in order to factor out the function \(X^i(x)\) in the i-th integral, as illustrated below. \(\square \)
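Let us illustrate the change of variables used at the end of the previous proof. Schematically, if the junction sits at \(x=1\) for the i-th edge, a term of the form \(\int _0^1 f^j(x)\, X^i(1-x) \,\mathrm dx\) becomes, after the substitution \(x\mapsto 1-x\),

$$\begin{aligned} \int _0^1 f^j(1-x)\, X^i(x) \,\mathrm dx, \end{aligned}$$

which is the origin of the factors evaluated at \((1-x)\) in (3.24).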

Proposition 3.8

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let \( X^i, Z^i \in H^2\) be such that

$$\begin{aligned} \sum _{\ell \in I_m} (-1)^{e^\ell _m} X^\ell (e^\ell _m) = \sum _{\ell \in I_m} (-1)^{e^\ell _m} Z^\ell (e^\ell _m) =0 \qquad \forall m \in J_G. \end{aligned}$$

Let \(\Gamma ^{\varepsilon ,\eta }:G\rightarrow {\mathbb {R}}^2\) be the triple junctions network defined by

$$\begin{aligned} \gamma ^{i,\varepsilon ,\eta }(x) :=\gamma ^i_*(x) + (\varepsilon X^i(x) + \eta Z^i(x))\nu ^i_*(x) + {\textsf{T}}^{i,\varepsilon , \eta }(x)\tau ^i_*(x), \end{aligned}$$

for any i, for any \(|\varepsilon |, |\eta |<\varepsilon _0\) and some \(\varepsilon _0>0\), where the \({\textsf{T}}^{i,\varepsilon ,\eta }\)’s are adapted to the \((\varepsilon X^i+\eta Z^i)\)’s.

Then

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } \frac{\textrm{d}}{\textrm{d}\eta } {\textrm{L}} (\Gamma ^{\varepsilon ,\eta })\bigg |_{0,0}&= \sum _i \int _0^1 \partial _s X^i \partial _s Z^i |\partial _x \gamma ^i_*| \,\mathrm dx \\&= \sum _{p \in P_G} \partial _s X^{i_p}(1) Z^{i_p}(1) + \sum _{m \in J_G} \sum _{\ell \in I_m} (-1)^{1+e^\ell _m} \partial _s X^\ell (e^\ell _m) Z^\ell (e^\ell _m) \\&\quad - \sum _i \int _0^1 \partial ^2_s X^i \, Z^i \, |\partial _x \gamma ^i_*| \,\mathrm dx, \end{aligned} \end{aligned}$$
(3.29)

where \(\partial _s X^i= |\partial _x\gamma ^i_*|^{-1}\partial _x X^i\) and \(\partial _s Z^i= |\partial _x\gamma ^i_*|^{-1}\partial _x Z^i\) for any i.

Proof

By Definitions 3.6 and 3.3, for any i we have that

$$\begin{aligned} {\textsf{T}}^{i,\varepsilon ,\eta } = \varepsilon {\textsf{T}}^{i}_X + \eta {\textsf{T}}^{i}_Z, \end{aligned}$$

where the \({\textsf{T}}^{i}_X\)’s are adapted to the \(X^i\)’s, and the \({\textsf{T}}^{i}_Z\)’s are adapted to the \(Z^i\)’s. Denoting \(\gamma ^{i,\varepsilon }:=\gamma ^{i,\varepsilon ,0}\), we compute

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } \frac{\textrm{d}}{\textrm{d}\eta } {\textrm{L}} (\Gamma ^{\varepsilon ,\eta })\bigg |_{0,0}&= \sum _i \frac{\textrm{d}}{\textrm{d}\varepsilon } \int _0^1 \frac{\langle \partial _x \gamma ^{i,\varepsilon }, \partial _x (Z^i \nu ^i_* + {\textsf{T}}^i_Z \tau ^i_*) \rangle }{|\partial _x \gamma ^{i,\varepsilon }|} \,\mathrm dx \bigg |_0 \\&= \sum _i \int _0^1 \left\langle - \frac{\langle \partial _x \gamma ^i_*, \partial _x(X^i \nu ^i_* + {\textsf{T}}^i_X \tau ^i_*)\rangle }{|\partial _x \gamma ^i_*|^3} \partial _x \gamma ^i_* \right. \\&\left. \quad + \frac{\partial _x(X^i \nu ^i_* + {\textsf{T}}^i_X \tau ^i_*)}{|\partial _x \gamma ^i_*|}, \partial _x (Z^i \nu ^i_* + {\textsf{T}}^i_Z \tau ^i_*) \right\rangle \,\mathrm dx. \end{aligned} \end{aligned}$$

Since \(\partial _x \tau ^i_*=\partial _x \nu ^i_*=0\) as \(\Gamma _*\) is minimal, we get

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}}{\textrm{d}\varepsilon } \frac{\textrm{d}}{\textrm{d}\eta } {\textrm{L}} (\Gamma ^{\varepsilon ,\eta })\bigg |_{0,0}&= \sum _i \int _0^1 \bigg \langle -\langle \tau ^i_*, \partial _s X^i \nu ^i_* + \partial _s {\textsf{T}}^i_X \tau ^i_*\rangle \tau ^i_* + \partial _s X^i \nu ^i_* \\&\quad + \partial _s {\textsf{T}}^i_X \tau ^i_*, \partial _s Z^i \nu ^i_* + \partial _s {\textsf{T}}^i_Z \tau ^i_* \bigg \rangle |\partial _x \gamma ^i_*|\,\mathrm dx \\&=\sum _i \int _0^1 \partial _s X^i \partial _s Z^i |\partial _x \gamma ^i_*| \,\mathrm dx. \end{aligned} \end{aligned}$$

Integrating by parts, the claim follows. \(\square \)
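For the reader’s convenience, the integration by parts behind the last step reads, edge by edge,

$$\begin{aligned} \int _0^1 \partial _s X^i \,\partial _s Z^i \,|\partial _x \gamma ^i_*| \,\mathrm dx = \big [ \partial _s X^i \, Z^i \big ]_{x=0}^{x=1} - \int _0^1 \partial ^2_s X^i \, Z^i \,|\partial _x \gamma ^i_*| \,\mathrm dx, \end{aligned}$$

since \(\partial _s = |\partial _x \gamma ^i_*|^{-1}\partial _x\); summing over i, the boundary contribution at \(x=e^\ell _m\) carries the sign \((-1)^{1+e^\ell _m}\), which produces the junction and endpoint terms in (3.29).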

3.3 Łojasiewicz–Simon inequalities for minimal networks

We need to set up a functional analytic framework for proving the desired Łojasiewicz–Simon inequalities.

For a fixed minimal network \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\), we set \(M:=\sharp J_G\) and \(P:=\sharp P_G\), and we define the Banach spaces

$$\begin{aligned} \begin{aligned} V&:=\bigg \{ {\overline{{\textsf{N}}}}:=({\textsf{N}}^1,\ldots ,{\textsf{N}}^N) \in [H^2(0,1)]^N \ :\ \sum _{\ell \in I_m} (-1)^{e^\ell _m} {\textsf{N}}^\ell (e^\ell _m) =0 \,\, \forall \, m \in J_G , \\&\qquad {\textsf{N}}^{i_p}(1)=0 \,\, \forall \, p \in P_G \bigg \}, \end{aligned} \end{aligned}$$
(3.30)

endowed with \(\Vert {\overline{{\textsf{N}}}}\Vert _V^2 :=\sum _i \Vert {\textsf{N}}^i\Vert _{H^2}^2\), and

$$\begin{aligned} Z:=W_1 \times \cdots \times W_M \times [L^2(0,1)]^N, \end{aligned}$$
(3.31)

endowed with the product norm, where

$$\begin{aligned} W_m :=\left\{ (v^\ell _m)_{\ell \in I_m} \in {\mathbb {R}}^3 \ :\ \sum _{\ell \in I_m}(-1)^{e^\ell _m} v^\ell _m=0 \right\} , \end{aligned}$$

and \(W_m\) is endowed with the Euclidean scalar product.
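For instance, at a junction m with \(e^\ell _m=0\) for every \(\ell \in I_m\), the space \(W_m\) is the plane \(\{(v^1,v^2,v^3) \in {\mathbb {R}}^3 \ :\ v^1+v^2+v^3=0\}\); in general each \(W_m\) is a two-dimensional subspace of \({\mathbb {R}}^3\), so Z couples finitely many junction values with N fields of class \(L^2\).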

Observe that \(\textrm{j}:V\hookrightarrow Z\) is a compact embedding, where \(\textrm{j}\) is the natural injection

$$\begin{aligned} {\overline{{\textsf{N}}}} \quad \overset{\textrm{j}}{\mapsto }\quad \left( ({\textsf{N}}^\ell (e^\ell _m)) , {\overline{{\textsf{N}}}}\right) \end{aligned}$$
(3.32)

For \(r_{\Gamma _*}>0\) small enough, we also define the energy \({{\textbf {L}}}:B_{r_{\Gamma _*}} (0) \subset V\rightarrow [0,+\infty )\) by

$$\begin{aligned} {{\textbf {L}}}({\overline{{\textsf{N}}}}) :=\sum _i {\textrm{L}} \left( \gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \right) , \end{aligned}$$
(3.33)

where the \({\textsf{T}}^i\)’s are adapted to the \({\textsf{N}}^i\)’s (see Definition 3.6). We observe that, according to Lemma 3.2, the immersions \(\gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \) define a triple junctions network.
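Note that \({{\textbf {L}}}(0)={\textrm{L}}(\Gamma _*)\): since the relations (3.22) are linear, the tangent functions adapted to \({\overline{{\textsf{N}}}}=0\) vanish identically, and the corresponding immersions reduce to the \(\gamma ^i_*\)’s. This is the critical value appearing in the Łojasiewicz–Simon inequality below.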

Corollary 3.9

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let \(V,Z,{{\textbf {L}}}\) be as above, and identify \(Z^\star \) with \(\textrm{j}^\star (Z^\star )\subset V^\star \), for \(\textrm{j}\) as in (3.32).

Then the following hold.

  1.

    The first variation \(\delta {{\textbf {L}}}: V\rightarrow Z^\star \) is \(Z^\star \)-valued by setting

    $$\begin{aligned} \delta {{\textbf {L}}}({\overline{{\textsf{N}}}})[((v^\ell _m), {\overline{X}})]&= \sum _{m \in J_G} \sum _{\ell \in I_m} (-1)^{1+e^\ell _m} \left[ \langle \tau ^\ell (e^\ell _m), \nu ^\ell _*(e^\ell _m)\rangle + \sum _{j \in I_m} h_{\ell j}\langle \tau ^j(e^j_m) , \tau ^j_*(e^j_m)\rangle \right] v^\ell _m \\&\quad - \sum _i \int _0^1 \bigg ( \langle {\varvec{k}}^i,\nu ^i_*\rangle |\partial _x \gamma ^i| + \sum _j f_{ij}\chi \langle {\varvec{k}}^j,\tau ^j_*\rangle |\partial _x \gamma ^j| \\&\quad + g_{ij}\chi (1-x) \langle {\varvec{k}}^j,\tau ^j_*\rangle (1-x)|\partial _x \gamma ^j|(1-x) \bigg ) X^i \,\mathrm dx, \end{aligned}$$
    (3.34)

    where \(f_{ij}, g_{ij}, h_{\ell j} \in {\mathbb {R}}\) depend on the topology of G, and \(\tau ^i, {\varvec{k}}^i\) refer to the immersions \(\gamma ^i :=\gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i\tau ^i_*\), with \({\textsf{T}}^i\) adapted to \({\textsf{N}}^i\).

    If also the network defined by the immersions \(\gamma ^i\) is regular, then

    $$\begin{aligned} \delta {{\textbf {L}}} ({\overline{{\textsf{N}}}})[((v^\ell _m), {\overline{X}})]&= - \sum _i \int _0^1 \bigg ( \langle {\varvec{k}}^i,\nu ^i_*\rangle |\partial _x \gamma ^i| + \sum _j f_{ij}\chi \langle {\varvec{k}}^j,\tau ^j_*\rangle |\partial _x \gamma ^j| \\&\quad + g_{ij}\chi (1-x) \langle {\varvec{k}}^j,\tau ^j_*\rangle (1-x)|\partial _x \gamma ^j|(1-x) \bigg ) X^i \,\mathrm dx. \end{aligned}$$
    (3.35)
  2.

    The second variation \(\delta ^2 {{\textbf {L}}}_0: V\rightarrow Z^\star \) at 0 is \(Z^\star \)-valued by setting

    $$\begin{aligned} \begin{aligned} \delta ^2 {{\textbf {L}}}_0 ( {\overline{X}} ) [((v^\ell _m), {\overline{Z}}) ]&= \sum _{m \in J_G} \sum _{\ell \in I_m} (-1)^{1+e^\ell _m} \partial _s X^\ell (e^\ell _m) v^\ell _m \\&\quad - \sum _i \int _0^1 \bigg (|\partial _x \gamma ^i_*| \partial ^2_s X^i \bigg ) Z^i \,\mathrm dx, \end{aligned} \end{aligned}$$
    (3.36)

    where \(\partial _s X^n= |\partial _x\gamma ^n_*|^{-1}\partial _x X^n\) for any n.

Proof

For the sake of precision, we maintain \(Z^\star \) and \(\textrm{j}^\star (Z^\star )\) distinct in this proof.

The first item follows by Proposition 3.7. Let \({\overline{{\textsf{N}}}},{\overline{X}}\in V\). Equation (3.24) yields the expression for \(\delta {{\textbf {L}}}({\overline{{\textsf{N}}}}) \in V^\star \), and we notice that, since \({\overline{X}}\in V\), the sum over endpoints \(p \in P_G\) in (3.24) vanishes. Hence (3.24) shows that there exists an element \(\nabla {{\textbf {L}}}({\overline{{\textsf{N}}}})\) of Z such that \(\delta {{\textbf {L}}}({\overline{{\textsf{N}}}})[{\overline{X}}] = \langle \nabla {{\textbf {L}}}({\overline{{\textsf{N}}}}), \textrm{j}({\overline{X}})\rangle _{Z}\). Letting \({\textrm{I}}:Z\rightarrow Z^\star \) be the natural isometry, this means

$$\begin{aligned} \delta {{\textbf {L}}}({\overline{{\textsf{N}}}})[{\overline{X}}] = \langle \nabla {{\textbf {L}}}({\overline{{\textsf{N}}}}), \textrm{j}({\overline{X}})\rangle _{Z} = \langle {\textrm{I}}\left( \nabla {{\textbf {L}}}({\overline{{\textsf{N}}}}) \right) ,\textrm{j}({\overline{X}})\rangle _{Z^\star ,Z} = \langle \,\textrm{j}^\star \left( {\textrm{I}}\left( \nabla {{\textbf {L}}}({\overline{{\textsf{N}}}}) \right) \right) ,{\overline{X}}\rangle _{V^\star ,V}, \end{aligned}$$

that is, \(\delta {{\textbf {L}}}({\overline{{\textsf{N}}}}) \in \textrm{j}^\star (Z^\star )\), and (3.34) follows as well. By the same reasoning, (3.35) follows from (3.25).

The second item analogously follows from Proposition 3.8. In this case we notice that the sum over endpoints \(p \in P_G\) in (3.29) vanishes whenever \({\overline{Z}} \in V\), leading to (3.36). \(\square \)

We now check that the assumptions needed to imply a Łojasiewicz–Simon inequality hold, see Proposition 3.12, starting from the analyticity of the functional and of its first variation.

Lemma 3.10

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let \(V,Z,{{\textbf {L}}}, r_{\Gamma _*}\) be as above, and identify \(Z^\star \) with \(\textrm{j}^\star (Z^\star )\subset V^\star \), for \(\textrm{j}\) as in (3.32).

Then the maps \({{\textbf {L}}}:B_{r_{\Gamma _*}} (0) \subset V\rightarrow [0,+\infty )\) and \(\delta {{\textbf {L}}}: V\rightarrow Z^\star \) are analytic.

Proof

The claim easily follows by recalling that multilinear continuous maps are analytic and that sums and compositions of analytic maps are analytic. Moreover, if \(T_j:U\subset B\rightarrow C_j\), for \(j=1,2\), is analytic from an open set U of a Banach space B into a Banach space \(C_j\), and \(\cdot :C_1\times C_2\rightarrow D\) is a bilinear continuous map into a Banach space D, then the “product operator” \(T(v,w):=T_1(v)\cdot T_2(w)\) is analytic from U into D.

Concerning analyticity of \({{\textbf {L}}}\) we need to check that

$$\begin{aligned} B_{r_{\Gamma _*}} (0) \ni \,{\overline{{\textsf{N}}}}\quad \mapsto \quad \int _0^1 \left| \partial _x \left( \gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \right) \right| \,\mathrm dx, \end{aligned}$$

is analytic for any i. Since the \({\textsf{T}}^i\)’s are adapted, they depend linearly on the \({\textsf{N}}^i\)’s; moreover, differentiation with respect to x is linear and continuous from V to \([H^1(0,1)]^N\). Also, for \(r_{\Gamma _*}\) sufficiently small, we have that \( \left| \partial _x \left( \gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \right) \right| \ge c_*>0\), for \(c_*\) depending on \(\Gamma _*, r_{\Gamma _*}\) only. Finally, integration is linear and continuous on \(L^1(0,1)\). Putting together all these observations, we get that \({{\textbf {L}}}\) is analytic.
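For instance, one can check the analyticity of the modulus above by writing

$$\begin{aligned} \left| \partial _x \left( \gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \right) \right| = \sqrt{\big \langle \partial _x \big ( \gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \big ), \partial _x \big ( \gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \big ) \big \rangle }, \end{aligned}$$

where the inner product is a continuous bilinear map with values in the Banach algebra \(H^1(0,1)\), and the square root induces an analytic composition operator on \(\{u \in H^1(0,1) \ :\ u \ge c_*^2\}\), as \(\sqrt{\cdot }\) is analytic away from the origin.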

The analyticity of \(\delta {{\textbf {L}}}: V\rightarrow Z^\star \) follows by completely analogous observations, recalling the expression in (3.34). Indeed, one can check that tangent and curvature vectors to an immersion \(\gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_*\) depend analytically on the parametrization, and then on \({\textsf{N}}\) (see for example the analogous treatment in [13, Section 3.1, Appendix B]). Moreover, the trace operator evaluating a tangent vector \(\tau ^\ell \in H^1(0,1)\) at junction points is linear and continuous. Recalling that product operators of analytic maps are analytic, the analyticity of \(\delta {{\textbf {L}}}\) follows. \(\square \)

Now we need to prove that the second variation is Fredholm of index zero. We recall that a continuous linear operator T between Banach spaces is Fredholm of index zero if its kernel has finite dimension, its image has finite codimension, and such dimensions are equal.

Lemma 3.11

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let \(V,Z,{{\textbf {L}}}\) be as above, and identify \(Z^\star \) with \(\textrm{j}^\star (Z^\star )\subset V^\star \), for \(\textrm{j}\) as in (3.32).

Then the second variation \(\delta ^2 {{\textbf {L}}}_0: V\rightarrow Z^\star \) at 0 is a Fredholm operator of index zero.

Proof

Denote by \({\textrm{I}}:Z\rightarrow Z^\star \) the natural isometry. Recalling (3.36), we see that the claim follows as long as we can prove that the following operator is Fredholm of index 0:

$$\begin{aligned} V \ni \quad {\overline{X}} \quad \mapsto \quad {\textrm{I}}\left( ((-1)^{1+e^{\ell }_m}\partial _s X^\ell (e^\ell _m) ), -|\partial _x\gamma ^i_*| \partial ^2_s X^i \right) \quad \in Z^\star . \end{aligned}$$

Let

$$\begin{aligned} \begin{aligned} V_1&:=\bigg \{ {\overline{X}}:=(X^1,\ldots ,X^N) \in [H^1(0,1)]^N \ :\ \sum _{\ell \in I_m} (-1)^{e^\ell _m} X^\ell (e^\ell _m) =0 \,\, \forall \, m \in J_G , \\&\qquad X^{i_p}(1)=0 \,\, \forall \, p \in P_G \bigg \}, \end{aligned} \end{aligned}$$

and let \(( (V^\ell _m), {\overline{Z}}) \in Z\) be fixed. We consider the operator \(F: V_1 \rightarrow {\mathbb {R}}\) given by

$$\begin{aligned} F({\overline{Y}}) :=\sum _{m \in J_G} \sum _{\ell \in I_m} V^\ell _m Y^\ell (e^\ell _m) + \sum _i \int _0^1 |\partial _x \gamma ^i_* |Z^i Y^i \,\mathrm dx. \end{aligned}$$

We can endow \(V_1\) with the scalar product \(\langle {\overline{X}},{\overline{Y}}\rangle :=\sum _i \int _0^1 \partial _s X^i \partial _s Y^i + X^iY^i \,\mathrm ds\), where \(\textrm{d}s=\textrm{d}s_{\gamma ^i_*}\) along the i-th edge. Hence \(F:(V_1,\langle \cdot ,\cdot \rangle )\rightarrow {\mathbb {R}}\) is linear and continuous, and then, by the Riesz representation theorem, there exists a unique \({\overline{X}} \in V_1\) such that

$$\begin{aligned} \sum _i \int _0^1 \partial _s X^i \partial _s Y^i + X^iY^i \,\mathrm ds = \sum _{m \in J_G} \sum _{\ell \in I_m} V^\ell _m Y^\ell (e^\ell _m) + \sum _i \int _0^1 Z^i Y^i \,\mathrm ds, \end{aligned}$$
(3.37)

for any \({\overline{Y}} \in V_1\). Testing on \({\overline{Y}} \in V_1\) such that \(Y^i\equiv 0\) for all i except for a fixed index j, and \(Y^j \in C^1_c(0,1)\), we see that

$$\begin{aligned} \int _0^1 \partial _s X^j\partial _s Y^j + X^jY^j \,\mathrm ds = \int _0^1 Z^j Y^j \,\mathrm ds, \end{aligned}$$

which implies that \(X^j \in H^2(0,1)\) with \(-\partial ^2_s X^j + X^j = Z^j\), and thus \({\overline{X}}\) belongs to V.

For \(m\in J_G\), we can now take \({\overline{Y}} \in V_1\) with \(Y^\ell \equiv 0\) for all \(\ell \) except for \(\ell \in I_m\), with \(Y^\ell \in C^1\) vanishing at the endpoint of \(E^\ell \) different from the junction m. Integration by parts in (3.37) then gives

$$\begin{aligned} \sum _{\ell \in I_m} (-1)^{1+ e^\ell _m} \partial _s X^\ell (e^\ell _m) Y^\ell (e^\ell _m) = \sum _{\ell \in I_m} V^\ell _m Y^\ell (e^\ell _m). \end{aligned}$$

Arbitrariness of \({\overline{Y}}\) implies that \(\sum _{\ell \in I_m} \left( (-1)^{1+ e^\ell _m} \partial _s X^\ell (e^\ell _m) - V^\ell _m \right) v^\ell =0\) for any triple \((v^\ell )_{\ell \in I_m}\) with \(\sum _{\ell \in I_m} (-1)^{e^\ell _m} v^\ell =0\), that is, the vector \(\left( (-1)^{1+ e^\ell _m} \partial _s X^\ell (e^\ell _m) - V^\ell _m \right) _{\ell \in I_m}\) is orthogonal to \(W_m\). Since the orthogonal complement of \(W_m\) in \({\mathbb {R}}^3\) is spanned by \(((-1)^{e^\ell _m})_{\ell \in I_m}\), there exists a constant \(\alpha _m \in {\mathbb {R}}\) such that

$$\begin{aligned} (-1)^{1+ e^\ell _m} \partial _s X^\ell (e^\ell _m) - V^\ell _m = \alpha _m (-1)^{e^\ell _m} \qquad \forall \ell \in I_m. \end{aligned}$$

Multiplying by \((-1)^{e^\ell _m}\), summing over \(\ell \in I_m\), and using that \((V^\ell _m)_{\ell \in I_m} \in W_m\), we get \(3 \alpha _m = - \sum _{\ell \in I_m} \partial _s X^\ell (e^\ell _m)\), and then

$$\begin{aligned} (-1)^{1+e^{\ell }_m}\left( \partial _s X^\ell (e^\ell _m) - \frac{1}{3} \sum _{j \in I_m} \partial _s X^j (e^j_m) \right) = V^\ell _m \qquad \forall \, \ell \in I_m. \end{aligned}$$

Therefore, we have proved that for arbitrary \(( (V^\ell _m), {\overline{Z}}) \in Z\) there exists a unique \({\overline{X}} \in V\) satisfying

$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial ^2_s X^i + X^i = Z^i &{}\quad \forall \, i , \\ (-1)^{1+e^{\ell }_m}\left( \partial _s X^\ell (e^\ell _m) - \frac{1}{3} \sum _{j \in I_m} \partial _s X^j (e^j_m) \right) = V^\ell _m &{}\quad \forall \, m \in J_G, \, \ell \in I_m. \end{array}\right. } \end{aligned}$$

Hence, if we further define the linear and continuous operator \({\mathscr {F}}: V \rightarrow Z^\star \) given by

$$\begin{aligned} {\mathscr {F}}({\overline{X}}) :={\textrm{I}} \left( \left( (-1)^{1+e^{\ell }_m}\left( \partial _s X^\ell (e^\ell _m) - \frac{1}{3} \sum _{j \in I_m} \partial _s X^j (e^j_m) \right) \right) , -|\partial _x\gamma ^i_*| \partial ^2_s X^i + |\partial _x\gamma ^i_*| X^i\right) , \end{aligned}$$

where \({\textrm{I}}:Z\rightarrow Z^\star \) is the natural isometry, we see that \({\mathscr {F}}\) is invertible, and thus it is Fredholm of index 0.

Recall that Fredholmness is stable under compact perturbations: a linear operator T between Banach spaces is Fredholm of index l if and only if \(T+K\) is Fredholm of index l, for any compact operator K (see [28, Section 19.1]). Therefore, since

$$\begin{aligned} V \ni \quad {\overline{X}} \quad \mapsto \quad {\textrm{I}}\left( \left( -(-1)^{1+e^{\ell }_m}\frac{1}{3} \sum _{j \in I_m} \partial _s X^j (e^j_m) \right) ,|\partial _x\gamma ^i_*| X^i \right) \quad \in Z^\star , \end{aligned}$$

is compact, by the compactness of the embeddings \(H^2(0,1)\hookrightarrow C^1([0,1])\) and \(H^2(0,1)\hookrightarrow L^2(0,1)\), we conclude that

$$\begin{aligned} V \ni \quad {\overline{X}} \quad \mapsto \quad {\textrm{I}}\left( ((-1)^{1+e^{\ell }_m}\partial _s X^\ell (e^\ell _m) ), -|\partial _x\gamma ^i_*| \partial ^2_s X^i \right) \quad \in Z^\star , \end{aligned}$$

is Fredholm of index 0 as well, completing the proof. \(\square \)

We can now apply the following abstract result, which gives sufficient conditions for a Łojasiewicz–Simon gradient inequality.

Proposition 3.12

([52, Corollary 2.6]) Let \(E:B_{\rho _0}(0)\subseteq V \rightarrow {\mathbb {R}}\) be an analytic map, where V is a Banach space. Suppose that 0 is a critical point for E, i.e., \(\delta E_0 = 0\). Assume that there exists a Banach space Z such that \(V\hookrightarrow Z\), the first variation \(\delta E: B_{\rho _0}(0)\rightarrow Z^\star \) is \(Z^\star \)-valued and analytic and the second variation \(\delta ^2 E_0: V \rightarrow Z^\star \) evaluated at 0 is \(Z^\star \)-valued and Fredholm of index zero.

Then there exist constants \(C,\rho _1>0\) and \(\theta \in (0,1/2]\) such that

$$\begin{aligned} |E(v)- E(0)|^{1-\theta } \le C \Vert \delta E_v \Vert _{Z^\star }, \end{aligned}$$

for every \(v \in B_{\rho _1}(0) \subseteq V\).

The above functional analytic result is a corollary of the useful theory developed in [9] and it has been independently observed in [54].

Theorem 3.13

(Łojasiewicz–Simon inequality at minimal networks) Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let V, Z be as in (3.30), (3.31), and define \({{\textbf {L}}}:B_{r_{\Gamma _*}} (0) \subset V\rightarrow [0,+\infty )\) as in (3.33).

Then there exist \(C_{\textrm{LS}}>0\), \(\theta \in (0,\tfrac{1}{2}]\), and \(r \in (0,r_{\Gamma _*}]\) such that

$$\begin{aligned} \left| \textbf{L}({\overline{{\textsf{N}}}}) - {\textrm{L}}(\Gamma _*) \right| ^{1-\theta } \le C_{\textrm{LS}} \left\| \delta {{\textbf {L}}}({\overline{{\textsf{N}}}}) \right\| _{Z^\star }, \end{aligned}$$

for any \({\overline{{\textsf{N}}}} \in B_r(0)\subset V\).

Proof

The proof immediately follows by applying Proposition 3.12 recalling Lemmas 3.10 and 3.11. \(\square \)

We can finally derive the following more explicit Łojasiewicz–Simon inequality for regular networks.

Corollary 3.14

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then there exist \(C_{\textrm{LS}},\sigma >0\) and \(\theta \in (0,\tfrac{1}{2}]\) such that the following holds.

If \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a regular network of class \(H^2\) such that

$$\begin{aligned} \sum _i \Vert \gamma ^i_* - \gamma ^i \Vert _{H^2(\textrm{d}x)} \le \sigma , \end{aligned}$$
(3.38)
$$\begin{aligned} \gamma ^i_*(p)=\gamma ^i(p) \qquad \forall \,p\in G\ :\ p\text { is an endpoint,} \end{aligned}$$
(3.39)

then

$$\begin{aligned} \left| {\textrm{L}}(\Gamma )-{\textrm{L}}(\Gamma _*) \right| ^{1-\theta } \le C_{\textrm{LS}} \left( \sum _i \int _0^1 |{\varvec{k}}^i|^2 \,\mathrm ds \right) ^{\frac{1}{2}}. \end{aligned}$$
(3.40)

Proof

For \(\sigma \) small enough, applying Proposition 3.4 and recalling (3.13), we know that there exist functions \({\textsf{N}}^i, {\textsf{T}}^i \in H^2(\textrm{d}x)\), where the \({\textsf{T}}^i\)’s are adapted to the \({\textsf{N}}^i\)’s, and reparametrizations \(\varphi ^i:[0,1]\rightarrow [0,1]\) such that

$$\begin{aligned} \gamma ^i\circ \varphi ^i(x) = \gamma ^i_*(x) + {\textsf{N}}^i(x)\nu ^i_*(x) + {\textsf{T}}^i(x)\tau ^i_*(x) =:{\widetilde{\gamma }}^i. \end{aligned}$$

Moreover, by (3.39), Lemma 3.1, and up to decreasing \(\sigma \), we have that \({\overline{{\textsf{N}}}}:=({\textsf{N}}^1,\ldots ,{\textsf{N}}^N)\) belongs to the ball \(B_r(0)\subset V\), where r, V are as in Theorem 3.13.

For \({{\textbf {L}}}, Z\) as in Theorem 3.13, since \(\Gamma \) is regular, by (3.35) we get that

$$\begin{aligned} \Vert \delta {{\textbf {L}}}({\overline{{\textsf{N}}}})\Vert _{Z^\star }^2&=\sum _i \int _0^1 \bigg | \langle \widetilde{{\varvec{k}}}^i,\nu ^i_*\rangle |\partial _x {{\widetilde{\gamma }}}^i| + \sum _j f_{ij}\chi \langle \widetilde{{\varvec{k}}}^j,\tau ^j_*\rangle |\partial _x {{\widetilde{\gamma }}}^j| \\&\quad + g_{ij}\chi (1-x) \langle \widetilde{{\varvec{k}}}^j,\tau ^j_*\rangle (1-x)|\partial _x {{\widetilde{\gamma }}}^j|(1-x) \bigg |^2 \,\mathrm dx \\&\le C(\Gamma _*,\sigma ) \sum _i \int _0^1 |\widetilde{{\varvec{k}}}^i|^2 \,\mathrm ds . \end{aligned}$$

Since \({\textrm{L}}(\Gamma )={{\textbf {L}}}({\overline{{\textsf{N}}}})\) and the \(L^2(\textrm{d}s)\) norm of the curvature on the right hand side of (3.40) does not depend on the parametrization, the above estimate together with Theorem 3.13 implies (3.40). \(\square \)

Remark 3.15

(Further Łojasiewicz–Simon inequalities at minimal networks) By an adaptation of the above arguments, we expect it to be possible to prove a Łojasiewicz–Simon inequality at minimal networks taking into account also variations at the endpoints.

More precisely, removing the constraint \({\textsf{N}}^{i_p}(1)=0\) for \({\overline{{\textsf{N}}}} \in V\) in (3.30), considering \({\widetilde{Z}}:={\mathbb {R}}^P \times Z\), for Z as in (3.31), and employing the variation formulae in Propositions 3.7 and 3.8, one can consider triple junctions networks \(\Gamma \) in a neighborhood of a minimal one \(\Gamma _*\) having endpoints different from those of \(\Gamma _*\).

Arguing as in the above propositions, one eventually deduces an analog of Theorem 3.13. The resulting statement would formally read exactly as Theorem 3.13, but in this case the norm \(\left\| \delta {{\textbf {L}}}({\overline{{\textsf{N}}}}) \right\| _{Z^\star }\) on the right hand side of the inequality also counts contributions from the varied endpoints. More precisely, the endpoint terms in the first variation formula (3.24) representing the operator \(\delta {{\textbf {L}}}({\overline{{\textsf{N}}}})\) do not vanish in general and thus contribute to its norm.

4 Minimal networks locally minimize length

In this section we provide a simple proof of the fact that minimal networks are automatically local minimizers for the length with respect to perturbations sufficiently small in \(C^0\).

More precisely, we say that a regular network \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) locally minimizes the length in \(C^0\) if there exists \(\eta >0\) such that \({\textrm{L}}(\Gamma ) \ge {\textrm{L}}(\Gamma _*)\) whenever \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a regular network having the same endpoints of \(\Gamma _*\) and such that \(\Vert \gamma ^i\circ \sigma ^i-\gamma ^i_*\Vert _{C^0} < \eta \), for some reparametrizations \(\sigma ^i\).

We mention that more general minimality properties of minimal networks can be proved, see [24, 47, 48, 51, 60].

Lemma 4.1

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then \(\Gamma _*\) locally minimizes the length in \(C^0\).

Proof

For any \(r>0\) and for any junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) of G, let \(T_{r,m}\) be the closed equilateral triangle having \(\Gamma _*(m)\) as barycenter and whose sides have length r and are orthogonal to the inner tangent vectors at m, that is, the vectors \((-1)^{e^i}\tau ^i(e^i)\), \((-1)^{e^j}\tau ^j(e^j)\), \((-1)^{e^k}\tau ^k(e^k)\).

Now fix \(r>0\) small enough such that the set \(T_{r,m} \cap \Gamma _*(G)\) is a standard triod for any junction m, i.e., such a set is the union of three straight segments of the same length having one end in common and forming angles equal to \(\tfrac{2}{3}\pi \) (see Fig. 2).

Let \(\Gamma :G\rightarrow {\mathbb {R}}^2\) be a smooth regular network with the same endpoints of \(\Gamma _*\). If, up to reparametrization, the immersions defining \(\Gamma \) are close to the ones of \(\Gamma _*\) in \(C^0\), then for any edge \(E_i\) whose ends, say \(m=\pi (0,i)\) and \(m'=\pi (1,i)\), are junctions, we can fix times \(0<t_{i,1}<t_{i,2}<1\) such that \(t_{i,1}\) is the last time \(\gamma ^i\) intersects \(\partial T_{r,m}\) and \(t_{i,2}\) is the first time \(\gamma ^i\) intersects \(\partial T_{r,m'}\). Such intersections define points close to \((\partial T_{r,m} ) \cap \Gamma _*(G)\) and \((\partial T_{r,m'} ) \cap \Gamma _*(G)\). In case \(\pi (0,i)\) is an endpoint, we set \(t_{i,1}=0\).

In order to complete the proof, if, say, \(m=\pi (0,i)=\pi (0,j)=\pi (0,k)\) is a junction, it is sufficient to prove that the length of \(\Gamma _*\) in \(T_{r,m}\) is not greater than the sum \(\sum _{\ell =i,j,k} {\textrm{L}}(\gamma ^\ell |_{(0,t_{\ell ,1})})\). Indeed, \(\Gamma _*(G)\setminus \cup _m T_{r,m}\) is given by straight segments orthogonal to the sides of the triangles \(T_{r,m}\) and whose endpoints lie either on parallel sides of different triangles \(T_{r,m'}, T_{r,m''}\), or on a side of a triangle \(T_{r,m}\) and on an endpoint \(\Gamma _*(p)\) of the network. Hence the length of \(\Gamma _*\) outside \( \cup _m T_{r,m}\) is automatically not greater than the sum of the lengths of the curves of \(\Gamma \) on the intervals \((t_{i,1},t_{i,2})\).

Eventually, the argument reduces to proving that the length of a standard triod \({\mathbb {T}}\), whose endpoints are the midpoints of the sides of an equilateral triangle, is the least possible among the lengths of topological triods having endpoints on the sides of the same triangle close to the ones of \({\mathbb {T}}\) (see Fig. 2). Up to scaling and translation, let us assume that the endpoints of the standard triod are located at the points \((-1,0), (1,0), (0,\sqrt{3})\) in the plane. Hence the endpoints of a competitor triod take the form \(A=(-1,0)+s(-\tfrac{1}{2},\tfrac{\sqrt{3}}{2})\), \(B=(1,0)+t(\tfrac{1}{2},\tfrac{\sqrt{3}}{2})\), \(C=(x,\sqrt{3})\) for s, t, x close to zero (see Fig. 2). The length of the competitor triod is greater than or equal to that of the Steiner tree joining A, B, C, which is another topological triod \({\mathbb {S}}\) whose total length can be shown to be equal to the length of the segment CT, where T is the point (farthest from C) such that A, B, T are the vertices of an equilateral triangle (see Fig. 2 and [50]). In the end, the proof follows if we show that \({\textrm{L}}({\mathbb {T}}) \le {\textrm{L}}(CT)\).

In our choice of coordinates we have that \({\textrm{L}}({\mathbb {T}})=2\sqrt{3}\). On the other hand, \(T=A + \textrm{R}(B-A)\), where \(\textrm{R}\) is the clockwise rotation by the angle \(\tfrac{\pi }{3}\). Hence

$$\begin{aligned} \begin{aligned} T=A+ \frac{1}{2}\begin{pmatrix} 1 &{}\quad \sqrt{3} \\ -\sqrt{3} &{}\quad 1 \end{pmatrix}(B-A) = (t-s, -\sqrt{3}). \end{aligned} \end{aligned}$$

Then \({\textrm{L}}(CT)^2 = (t-s-x)^2+(-\sqrt{3}-\sqrt{3})^2 \ge (2\sqrt{3})^2 = {\textrm{L}}({\mathbb {T}})^2\), which completes the proof. \(\square \)
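As a consistency check, for \(s=t=x=0\) we recover \(A=(-1,0)\), \(B=(1,0)\), \(C=(0,\sqrt{3})\), and \(T=(0,-\sqrt{3})\), so that \({\textrm{L}}(CT)=2\sqrt{3}={\textrm{L}}({\mathbb {T}})\); in particular, the final inequality is an equality exactly at the standard triod.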

Fig. 2

Standard triod joining the midpoints of the sides of an equilateral triangle (dashed lines). Dotted lines: equilateral triangle constructed on the side AB. For other endpoints A, B, C close to such midpoints, the length of the Steiner tree joining A, B, C is equal to the length of CT

5 Stability and convergence

In this section we prove our main stability theorem. First we need the next technical lemma, which is based on a simple contradiction argument implying that the motion by curvature starting sufficiently close to a minimal network \(\Gamma _*\) in \(H^2\) passes as close as prescribed to \(\Gamma _*\) in \(C^k\) at some positive time.

Lemma 5.1

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then, for any \(\eta >0\) and \(k \in {\mathbb {N}}\) there exists \({\overline{\varepsilon }}={\overline{\varepsilon }}(\Gamma _*,\eta ,k)>0\) such that the following holds.

For any smooth regular network \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) such that \(\Vert \gamma ^i_0-\gamma ^i_*\Vert _{H^2} < {\overline{\varepsilon }}\), the motion by curvature \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\), for \(t \in [0,T)\), starting from \(\Gamma _0\) satisfies

$$\begin{aligned} \Vert \gamma ^i_\tau \circ \sigma ^i-\gamma ^i_*\Vert _{C^k} < \eta , \end{aligned}$$
(5.1)

for some \(\tau \in (0,T)\) and smooth reparametrizations \(\sigma ^i\), for any i.

Proof

Suppose by contradiction that there are \(\eta >0\), \(k \in {\mathbb {N}}\), and a sequence of smooth regular networks \(\Gamma _{n,0}:G\rightarrow {\mathbb {R}}^2\) such that \(\Vert \gamma ^i_{n,0}-\gamma ^i_*\Vert _{H^2} < 1/n\), but the motions by curvature \(\Gamma _{n,t}:G\rightarrow {\mathbb {R}}^2\), defined on maximal intervals \([0,T_n)\) and starting from \(\Gamma _{n,0}\), satisfy

$$\begin{aligned} \Vert \gamma ^i_{n,t}\circ \sigma ^i_t-\gamma ^i_*\Vert _{C^k} \ge \eta , \end{aligned}$$
(5.2)

for any \(t \in (0,T_n)\) and any reparametrizations \(\sigma ^i_t\), where \(\sigma ^i_t\) is smooth with respect to x.

By Theorem 2.10, since \(\Vert \gamma ^i_{n,0}-\gamma ^i_*\Vert _{H^2} \rightarrow 0\) for any i as \(n\rightarrow \infty \), there exists \(T>0\) such that \(T_n> 2 T\) for any n. Moreover the solutions \({\mathcal {N}}_n\) of the motion by curvature starting from \(\Gamma _{n,0}\) satisfy a uniform bound \(\Vert {\mathcal {N}}_n \Vert _{W^{1,2}_5}\le M=M(\Gamma _*)\). By the compact embedding \(W^{1,2}_5\hookrightarrow W^{1,2}_4\), it follows that, up to subsequence, the solutions \(\gamma ^i_{n,t}\) converge in \(W^{1,2}_4\left( (0,T)\times (0,1);{\mathbb {R}}^2\right) \) to limit immersions \(\gamma ^i_{\infty ,t}\). Moreover, \(\gamma ^i_{n,0}\rightarrow \gamma ^i_{\infty ,0}=\gamma ^i_*\) in \(H^2\), and passing to the limit at almost every (t, x) in

$$\begin{aligned} \langle \partial _t \gamma ^i_{n,t}, \nu ^i_{n,t}\rangle \nu ^i_{n,t} = {\varvec{k}}^i_{n,t}, \end{aligned}$$

we deduce that the maps \(\gamma ^i_{\infty ,t}\) give a solution to the motion by curvature starting from \(\Gamma _*\). Since \(\Gamma _*\) is minimal, then \(\gamma ^i_{\infty ,t}\) actually coincides with \(\gamma ^i_*\) up to reparametrization.

From the uniform bound in \(W^{1,2}_5\), we can fix \(s\in (0,T)\) such that \(\gamma ^i_{n,s}\rightarrow \gamma ^i_{\infty ,s}\) in \(H^2\) for any i. Hence the \(L^2(\textrm{d}s)\)-norm of the curvature of \(\gamma ^i_{n,s}\) is bounded from above and the length \({\textrm{L}}(\gamma ^i_{n,s})\) is bounded from below away from zero, independently of n. Recalling from Theorem 2.10 and Remark 2.12 that for positive times the flow is smooth and it evolves according to \(\partial _t\gamma ^i_{n,t} = \partial ^2_x\gamma ^i_{n,t}/|\partial _x\gamma ^i_{n,t}|^2\), we can apply the regularity estimates from [43, Proposition 5.10, Proposition 5.8] considering \(\Gamma _{n,s}\) as a new initial datum. This implies that there are \(s<T_1\le T\) and \(C_m>0\), for any \(m \in {\mathbb {N}}\), independent of n such that \(\Vert {\varvec{k}}^i_{n,t}\Vert _{H^m}(\textrm{d}s)\le C_m\) for any \(t \in [s,T_1]\).

Therefore the sequence of flows \(\Gamma _{n,t}\) converges smoothly on \([s,T_1]\times G\), up to reparametrizations, to the motion by curvature \({{\widehat{\Gamma }}}_{\infty ,t}\) parametrized by \({{\widehat{\gamma }}}^i_{\infty ,t}\), and \({{\widehat{\gamma }}}^i_{\infty ,t}\) is a reparametrization of \(\gamma ^i_*\). As the convergence holds in \(H^m\) for any \(m \in {\mathbb {N}}\), we find a contradiction with (5.2) at any \(t \in [s,T_1]\) for large n. \(\square \)

Theorem 5.2

Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then there exists \(\delta _{\Gamma _*}>0\) such that the following holds.

Let \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network such that \(\gamma ^i_*(p)=\gamma ^i_0(p)\) for any endpoint \(p\in G\) and such that \(\Vert \gamma ^i_0-\gamma ^i_*\Vert _{H^2(\textrm{d}x)}\le \delta _{\Gamma _*}\). Then the motion by curvature \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\) starting from \(\Gamma _0\) exists for all times and it smoothly converges, up to reparametrization, to a minimal network \(\Gamma _\infty \) such that \({\textrm{L}}(\Gamma _\infty )={\textrm{L}}(\Gamma _*)\).

Proof

We recall the following interpolation inequalities. For any \(k \in {\mathbb {N}}\) with \(k\ge 1\) there exist \(\lambda _k>0,\zeta _k\in (0,1)\) such that

$$\begin{aligned} \Vert u \Vert _{H^k(\textrm{d}x)} \le \lambda _k \Vert u \Vert _{L^2}^{\zeta _k} \Vert u \Vert _{H^{k+1}}^{1-\zeta _k}, \end{aligned}$$
(5.3)

for any \(u \in H^{k+1}\left( (0,1);{\mathbb {R}}^N\right) \). We shall drop the subscript k when \(k=2\).
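For instance, one can take \(\zeta _k = \tfrac{1}{k+1}\): by standard interpolation theory, \(H^k(0,1)\) coincides with the interpolation space \([L^2(0,1),H^{k+1}(0,1)]_{\frac{k}{k+1},2}\), which yields (5.3) with this exponent.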

Let \(\sigma ,\theta ,r,C_{\textrm{LS}}\) be given by Theorem 3.13 and Corollary 3.14, where \(C_{\textrm{LS}}\) is the maximum of the constants given by the two statements.

Recalling Lemma 4.1, up to decreasing \(r>0\), we can assume that the following hold. Whenever \({{\widehat{\gamma }}}^i:=\gamma ^i_*+ {\textsf{N}}^i\nu ^i_*+{\textsf{T}}^i\tau ^i_*\) is a smooth regular network, for \({\overline{{\textsf{N}}}}\in B_r(0)\subset V\) in the notation of Theorem 3.13, where the \({\textsf{T}}^i\)’s are adapted, then

  (1)

    there exists a constant \(C_G>2\), depending only on the graph G and \(\Gamma _*\), such that

    $$\begin{aligned} \begin{aligned}&\langle {{\widehat{\nu }}}^i , \nu ^i_*\rangle \ge \frac{3}{4} , \qquad \qquad |\langle {{\widehat{\nu }}}^i , \tau ^i_*\rangle | < \frac{1}{C_G}, \\&\sum _{m \in J_G}\sum _{\ell \in I_m} \left| a^\ell (x) \langle {{\widehat{\nu }}}^\ell , \nu ^\ell _*\rangle + \langle {{\widehat{\nu }}}^\ell , \tau ^\ell _*\rangle \chi (x){\mathscr {L}}^\ell _m (a^{i_m}(x),a^{j_m}(x) ) \right| ^2 \ge \frac{2}{C_G}\sum _i |a^i(x)|^2 , \end{aligned} \end{aligned}$$

    where \({{\widehat{\nu }}}^i\) is the normal vector of \({{\widehat{\gamma }}}^i\), and \(i_m\) (resp. \(j_m\)) denotes the minimal (resp. intermediate) element of \(I_m\); the inequality is required to hold for any continuous functions \(a^1,\ldots ,a^N\);

  (2)

    there exist \(c_1,c_2>0\) such that

    $$\begin{aligned} c_1\le |\partial _x {{\widehat{\gamma }}}^i|^{-1} \le c_2, \end{aligned}$$

    for any i;

  (3)

    there is \(C_G'>2\) such that

    • if \(\Xi \) is a smooth regular network having the same endpoints of \(\Gamma _*\) defined by immersions \(\xi ^i\) such that \(\Vert \xi ^i-\gamma ^i_*\Vert _{H^2}< C_G' r\), then \({\textrm{L}}(\Xi )\ge {\textrm{L}}(\Gamma _*)\);

    • \(\Vert {{\widehat{\gamma }}}^i - \gamma ^i_*\Vert _{H^2}< \min \{(C_G'-1) r, \sigma /2 \}\).

We claim that whenever \({{\widehat{\gamma }}}^i_t=\gamma ^i_*+ {\textsf{N}}^i_t\nu ^i_*+{\textsf{T}}^i_t\tau ^i_*\) is a smooth solution to the motion by curvature, with \({\overline{{\textsf{N}}}}_t\in B_r(0)\subset V\) for any t, where we used the notation of Theorem 3.13 and the \({\textsf{T}}_t^i\)’s are adapted, then for any \(m \in {\mathbb {N}}\) with \(m\ge 3\) there exists \(C_m=C_m(r, \Gamma _*)>0\) such that

$$\begin{aligned} \left\| {\overline{{\textsf{N}}}}_t \right\| _{H^m(\textrm{d}x)} \le C_m, \end{aligned}$$
(5.4)

for any t. The claim follows by combining the fact that \({\overline{{\textsf{N}}}}_t\in B_r(0)\) ensures a uniform \(C^1\)-bound on the parametrizations with the fact that uniform upper bounds on the \(L^2(\textrm{d}s)\)-norm of the curvature along a motion by curvature imply uniform \(L^2(\textrm{d}s)\)-bounds on every derivative of the immersion. The detailed proof of (5.4) is postponed to the end of the present proof.

Taking into account Lemma 4.1 and Corollary 3.5, we can fix \(\eta >0\) such that:

  (i)

    if immersions \({{\widehat{\gamma }}}^i\) define a regular network \({{\widehat{\Gamma }}}\) with the same endpoints of \(\Gamma _*\) such that \(\Vert {{\widehat{\gamma }}}^i - \gamma ^i_*\Vert _{C^0} \le 2\eta \), then \({\textrm{L}}(\Gamma _*)\le {\textrm{L}}({{\widehat{\Gamma }}})\);

  (ii)

    if \({{\widehat{\gamma }}}^i_t\) define a one-parameter family of immersions satisfying the assumptions of Corollary 3.5 and \(\sum _i \Vert {{\widehat{\gamma }}}^i_t - \gamma ^i_* \Vert _{C^5} \le \eta \) for any t around some \(t_0\), then the resulting \({\textsf{N}}^i_t\) verify \(\sum _i \Vert {\textsf{N}}^i_t\Vert _{H^4(\textrm{d}x)}< \tfrac{r}{2}\) for any t around \(t_0\);

  (iii)

    if immersions \({{\widehat{\gamma }}}^i\) define a network \({{\widehat{\Gamma }}}\) such that \(\Vert {{\widehat{\gamma }}}^i - \gamma ^i_*\Vert _{C^1} \le \eta \), then

    $$\begin{aligned} \left| {\textrm{L}}({{\widehat{\Gamma }}}) - {\textrm{L}}(\Gamma _*) \right| ^\theta \le \frac{\theta r^{\frac{1}{\zeta }} }{C_{\textrm{LS}} \sqrt{c_2C_G} \left( 100\, \lambda C_3\right) ^{\frac{1}{\zeta }}}. \end{aligned}$$

With the above choices, we want to show that the statement follows by choosing

$$\begin{aligned} \delta _{\Gamma _*}:={\overline{\varepsilon }}\bigg (\Gamma _*,\frac{\eta }{N},5\bigg ), \end{aligned}$$

where \({\overline{\varepsilon }}\) is given by Lemma 5.1.

So let \(\Gamma _0\) be as in the statement. By Lemma 5.1, the flow \(\Gamma _t\) starting from \(\Gamma _0\) satisfies

$$\begin{aligned} \sum _i\Vert \gamma ^i_\tau \circ \sigma ^i-\gamma ^i_*\Vert _{C^5} < \eta , \end{aligned}$$
(5.5)

for some \(\tau \in [0,T)\) and smooth reparametrizations \(\sigma ^i\). Then by (i) we have \({\textrm{L}}(\Gamma _\tau )\ge {\textrm{L}}(\Gamma _*)\). Moreover, if \({\textrm{L}}(\Gamma _\tau )= {\textrm{L}}(\Gamma _*)\), then (i) implies that \(\Gamma _\tau \) is a local minimizer for the length in \(C^0\), and thus it is minimal up to reparametrization, and the resulting flow is stationary. Hence we can assume that \({\textrm{L}}(\Gamma _\tau )> {\textrm{L}}(\Gamma _*)\).

Moreover, by Corollary 3.5 and (ii) we get the existence of \({\textsf{N}}^i_t,{\textsf{T}}^i_t,\varphi ^i_t\) as in Corollary 3.5 such that

$$\begin{aligned} \gamma ^i_t\circ \sigma ^i \circ \varphi ^i_t = \gamma ^i_* + {\textsf{N}}^i_t \nu ^i_* + {\textsf{T}}^i_t \tau ^i_* =:{{\widetilde{\gamma }}}^i_t, \end{aligned}$$
(5.6)

with

$$\begin{aligned} \sum _i \Vert {\textsf{N}}^i_t\Vert _{H^4(\textrm{d}x)} < \frac{r}{2}, \end{aligned}$$

for any \(t \in [\tau ,\tau _1)\) with \(\tau _1>\tau \).

We define the nonincreasing function

$$\begin{aligned} H(t):=({\textrm{L}}(\Gamma _t)- {\textrm{L}}(\Gamma _*) )^\theta , \end{aligned}$$
(5.7)

for \(t \in [0,T)\).

Let us further define S as the supremum of all \(s \in [\tau ,T)\) such that \(\gamma ^i_t\) can be written as in (5.6) for some reparametrizations \(\varphi ^i_t\) and functions \({\textsf{N}}^i_t\) continuously differentiable in time with \(\sum _i \Vert {\textsf{N}}^i_t\Vert _{H^2(\textrm{d}x)} < r\) for any \(t \in [\tau ,s]\).

We have that \(S\ge \tau _1>\tau \). Moreover, we can assume that \({\textrm{L}}(\Gamma _s)>{\textrm{L}}(\Gamma _*)\) for any \(s \in [\tau ,S)\). Indeed, if instead \({\textrm{L}}(\Gamma _s)={\textrm{L}}(\Gamma _*)\) for some s, then \(\Gamma _s\) locally minimizes the length in \(H^2\): if immersions \({{\bar{\gamma }}}^i\) define a smooth regular network with \(\Vert {{\bar{\gamma }}}^i-{{\widetilde{\gamma }}}^i_s\Vert _{H^2}< r\), then \(\Vert {{\bar{\gamma }}}^i-\gamma ^i_*\Vert _{H^2} \le \Vert {{\bar{\gamma }}}^i-{{\widetilde{\gamma }}}^i_s\Vert _{H^2} + \Vert {{\widetilde{\gamma }}}^i_s-\gamma ^i_*\Vert _{H^2} < C_G'r\) by (3), and then \({\textrm{L}}(\Gamma _s)={\textrm{L}}(\Gamma _*) \le {\textrm{L}}({{\bar{\Gamma }}})\) by (3). Hence in this case \(\Gamma _s\) is minimal, up to reparametrization, and the resulting flow is stationary.

Therefore we can assume \(H(t)>0\) for \(t \in (\tau ,S)\), and then H is differentiable on \((\tau ,S)\). We now want to show that \(S=T=+\infty \).

We differentiate

$$\begin{aligned} \begin{aligned} -\frac{\textrm{d}}{\textrm{d}t} H&= \theta H^{\frac{\theta -1}{\theta }} \sum _i \int _0^1 |{\varvec{k}}^i_t|^2 \,\mathrm ds = \theta H^{\frac{\theta -1}{\theta }} \left( \sum _i \int _0^1 |{\varvec{k}}^i_t|^2 \,\mathrm ds \right) ^{\frac{1}{2}} \left\| (\partial _t\Gamma _t)^\perp \right\| _{L^2(\textrm{d}s)} \\&\ge \frac{\theta }{C_{\textrm{LS}}} \left\| (\partial _t\Gamma _t)^\perp \right\| _{L^2(\textrm{d}s)}, \end{aligned} \end{aligned}$$

for any \(t \in (\tau ,S)\), where we denoted \( \left\| (\partial _t\Gamma _t)^\perp \right\| _{L^2(\textrm{d}s)}^2 :=\sum _i \int |(\partial _t\gamma ^i_t)^\perp |^2 \,\mathrm ds = \sum _i \int |(\partial _t{{\widetilde{\gamma }}}^i_t)^\perp |^2 \,\mathrm ds\); here we used the identity \(\tfrac{\textrm{d}}{\textrm{d}t} {\textrm{L}}(\Gamma _t) = - \sum _i \int _0^1 |{\varvec{k}}^i_t|^2 \,\mathrm ds\), valid along the motion by curvature, and we could apply the Łojasiewicz–Simon inequality in Corollary 3.14 thanks to (3). From the above estimate we get

$$\begin{aligned} \begin{aligned} -\frac{\textrm{d}}{\textrm{d}t} H&\ge \frac{\theta }{C_{\textrm{LS}}}\left( \sum _i \int |(\partial _t{{\widetilde{\gamma }}}^i_t)^\perp |^2 \,\mathrm ds \right) ^{\frac{1}{2}} \\&= \frac{\theta }{C_{\textrm{LS}}}\left( \sum _i \int |\partial _t {\textsf{N}}^i_t \langle \nu ^i_t,\nu ^i_*\rangle + \partial _t{\textsf{T}}^i_t \langle \nu ^i_t,\tau ^i_*\rangle |^2 \,\mathrm ds \right) ^{\frac{1}{2}} \\&\ge \frac{\theta }{C_{\textrm{LS}}}\left( \frac{1}{2}\sum _{m \in J_G} \sum _{\ell \in I_m} \int |\partial _t {\textsf{N}}^\ell _t \langle \nu ^\ell _t,\nu ^\ell _*\rangle + \partial _t{\textsf{T}}^\ell _t \langle \nu ^\ell _t,\tau ^\ell _*\rangle |^2 \,\mathrm ds \right) ^{\frac{1}{2}} \\&\overset{(1)}{\ge }\ \frac{\theta }{C_{\textrm{LS}}\sqrt{C_G}} \left( \sum _i \int |\partial _t {\textsf{N}}^i_t|^2 \,\mathrm ds \right) ^{\frac{1}{2}} \\&\overset{(2)}{\ge }\ \frac{\theta }{C_{\textrm{LS}}\sqrt{C_G\, c_2}} \left( \sum _i \int |\partial _t {\textsf{N}}^i_t|^2 \,\mathrm dx \right) ^{\frac{1}{2}}, \end{aligned} \end{aligned}$$

for any \(t \in (\tau ,S)\). Hence

$$\begin{aligned} \left\| {\overline{{\textsf{N}}}}_s - {\overline{{\textsf{N}}}}_\tau \right\| _{L^2(\textrm{d}x)}&= \left\| \int _\tau ^s \partial _t {\overline{{\textsf{N}}}}_t \,\mathrm dt \right\| _{L^2(\textrm{d}x)} \le \int _\tau ^s \left\| \partial _t {\overline{{\textsf{N}}}}_t \right\| _{L^2(\textrm{d}x)} \,\mathrm dt \nonumber \\&= \int _\tau ^s \left( \sum _i \int _0^1 |\partial _t {\textsf{N}}^i_t|^2 \,\mathrm dx \right) ^{\frac{1}{2}} \,\mathrm dt\nonumber \\&\le \frac{C_{\textrm{LS}}\sqrt{C_G\, c_2}}{\theta } \left( H(\tau )-H(s) \right) \nonumber \\&\le \frac{C_{\textrm{LS}}\sqrt{C_G\, c_2}}{\theta } H(\tau ), \end{aligned}$$
(5.8)

for any \(s \in (\tau ,S)\). Recalling (5.5) and (iii), we conclude that

$$\begin{aligned} \left\| {\overline{{\textsf{N}}}}_s - {\overline{{\textsf{N}}}}_\tau \right\| _{L^2(\textrm{d}x)} \le \frac{r^{\frac{1}{\zeta }} }{\left( 100\, \lambda C_3\right) ^{\frac{1}{\zeta }}}, \end{aligned}$$

for any \(s \in (\tau ,S)\). Exploiting the interpolation inequality (5.3) with \(k=2\) we obtain

$$\begin{aligned} \begin{aligned} \left\| {\overline{{\textsf{N}}}}_s - {\overline{{\textsf{N}}}}_\tau \right\| _{H^2(\textrm{d}x)}&\le \frac{r}{100\, C_3} \left\| {\overline{{\textsf{N}}}}_s - {\overline{{\textsf{N}}}}_\tau \right\| _{H^3(\textrm{d}x)}^{1-\zeta } \\&\overset{(5.4)}{\le } \frac{r}{100\, C_3} (2C_3)^{1-\zeta } \\&\le \frac{r}{50}, \end{aligned} \end{aligned}$$

for any \(s \in (\tau ,S)\). Since \(\Vert {\overline{{\textsf{N}}}}_\tau \Vert _{H^2(\textrm{d}x)}<\tfrac{r}{2}\), a simple contradiction argument (if \(S<T\), the above bound would allow the representation (5.6) to be continued beyond S, against the maximality of S) implies that \(S=T\) and \(\Vert {\overline{{\textsf{N}}}}_t\Vert _{H^2(\textrm{d}x)}<\tfrac{r}{2}+\tfrac{r}{50}\) for any \(t \in [\tau ,T)\). Hence Theorem 2.13 implies that \(T=+\infty \).

We claim that \(H(t)\searrow 0\) as \(t\rightarrow +\infty \). Indeed, since \(S=T=+\infty \), we now know that (5.4) holds for any time. Hence there exists a sequence of times \(t_n\rightarrow +\infty \) such that the parametrizations \({{\widetilde{\gamma }}}^i_{t_n}\) converge in \(C^2\) to limit parametrizations \({{\widetilde{\gamma }}}^i_\infty :=\gamma ^i_* + {\textsf{N}}^i_\infty \nu ^i_* + {\textsf{T}}^i_\infty \tau ^i_*\) with \({\overline{{\textsf{N}}}}_\infty \in B_r(0) \subset V\). Moreover, the \({{\widetilde{\gamma }}}^i_\infty \)’s parametrize a minimal network \({{\widetilde{\Gamma }}}_\infty \). Hence, using (3) and Corollary 3.14, the length of \({{\widetilde{\Gamma }}}_\infty \) has to be equal to the length of \(\Gamma _*\). As H is nonincreasing, then \(H(t)\searrow 0\) as \(t\rightarrow +\infty \).

Exploiting the fact that H(t) is infinitesimal as t diverges, estimating as in (5.8) for large times shows that the curve \(t\mapsto {\overline{{\textsf{N}}}}_t\) is Cauchy in \(L^2(\textrm{d}x)\); hence its full limit \({\overline{{\textsf{N}}}}_\infty \) in \(L^2(\textrm{d}x)\) exists as \(t\rightarrow +\infty \). Interpolating using (5.3) and (5.4), we then conclude that convergence holds in \(H^m\) for any m.
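
In its simplest form, the interpolation step can be spelled out as follows: (5.3) provides constants \(\lambda _m>0\) and exponents \(\zeta _m \in (0,1)\) (the precise values are immaterial here) such that

$$\begin{aligned} \left\| {\overline{{\textsf{N}}}}_t - {\overline{{\textsf{N}}}}_\infty \right\| _{H^m(\textrm{d}x)} \le \lambda _m \left\| {\overline{{\textsf{N}}}}_t - {\overline{{\textsf{N}}}}_\infty \right\| _{H^{m+1}(\textrm{d}x)}^{1-\zeta _m} \left\| {\overline{{\textsf{N}}}}_t - {\overline{{\textsf{N}}}}_\infty \right\| _{L^2(\textrm{d}x)}^{\zeta _m}, \end{aligned}$$

so the uniform \(H^{m+1}(\textrm{d}x)\) bounds granted by (5.4) for every m, combined with the \(L^2(\textrm{d}x)\) convergence just proved, force the left-hand side to vanish as \(t\rightarrow +\infty \).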

We are now left to prove the claim (5.4). For the sake of clarity, we consider the case \(m=3\) only, the general case following by induction. We differentiate the curvature \(\widehat{{\varvec{k}}}^i_t\) of \({{\widehat{\gamma }}}^i_t\) and multiply it by the normal \({{\widehat{\nu }}}^i_t\) to get the identity

$$\begin{aligned} \begin{aligned} \langle \partial _x \widehat{{\varvec{k}}}^i_t,{{\widehat{\nu }}}^i_t\rangle&= \left\langle \partial _x \left( |\partial _x {{\widehat{\gamma }}}^i_t|^{-2} \partial _x^2 {{\widehat{\gamma }}}^i_t - |\partial _x {{\widehat{\gamma }}}^i_t|^{-4} \langle \partial ^2_x {{\widehat{\gamma }}}^i_t , \partial _x {{\widehat{\gamma }}}^i_t\rangle \partial _x {{\widehat{\gamma }}}^i_t\right) , {{\widehat{\nu }}}^i_t \right\rangle \\&= \left\langle |\partial _x {{\widehat{\gamma }}}^i_t|^{-2} \partial _x^3 {{\widehat{\gamma }}}^i_t -2|\partial _x {{\widehat{\gamma }}}^i_t|^{-4} \langle \partial ^2_x {{\widehat{\gamma }}}^i_t , \partial _x {{\widehat{\gamma }}}^i_t\rangle \partial _x^2 {{\widehat{\gamma }}}^i_t \right. \\&\left. \quad - |\partial _x {{\widehat{\gamma }}}^i_t|^{-4} \langle \partial ^2_x {{\widehat{\gamma }}}^i_t , \partial _x {{\widehat{\gamma }}}^i_t\rangle \partial _x^2 {{\widehat{\gamma }}}^i_t, {{\widehat{\nu }}}^i_t \right\rangle \\&= |\partial _x {{\widehat{\gamma }}}^i_t|^{-2} \langle \partial _x^3{\textsf{N}}^i_t \nu ^i_* + \partial _x^3 {\textsf{T}}^i_t \tau ^i_* , {{\widehat{\nu }}}^i_t\rangle \\&\quad - \left\langle 2|\partial _x {{\widehat{\gamma }}}^i_t|^{-4} \langle \partial ^2_x {{\widehat{\gamma }}}^i_t , \partial _x {{\widehat{\gamma }}}^i_t\rangle \partial _x^2 {{\widehat{\gamma }}}^i_t +|\partial _x {{\widehat{\gamma }}}^i_t|^{-4} \langle \partial ^2_x {{\widehat{\gamma }}}^i_t , \partial _x {{\widehat{\gamma }}}^i_t\rangle \partial _x^2 {{\widehat{\gamma }}}^i_t, {{\widehat{\nu }}}^i_t \right\rangle . \end{aligned} \end{aligned}$$
(5.9)

Taking absolute values and recalling (1), (2), we deduce that

$$\begin{aligned} \Vert \partial ^3_x {\textsf{N}}^i_t \Vert _{L^1(\textrm{d}x)} \le C(r,\Gamma _*)\left( 1 + \int \left| \partial _s \widehat{{\varvec{k}}}^i_t\right| \,\mathrm ds \right) , \end{aligned}$$

where \(C(r,\Gamma _*)>0\) here is a constant that may change from line to line.
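
For later use in the next paragraph, we also record the elementary one-dimensional embedding behind the passage from a \(W^{3,1}(\textrm{d}x)\) bound to a \(W^{2,\infty }(\textrm{d}x)\) bound: for \(f \in W^{1,1}((0,1))\) one has \(f(x) = f(y) + \int _y^x \partial _x f \,\mathrm dx'\), and averaging in y yields

$$\begin{aligned} \Vert f \Vert _{L^\infty (\textrm{d}x)} \le \Vert f \Vert _{L^1(\textrm{d}x)} + \Vert \partial _x f \Vert _{L^1(\textrm{d}x)}. \end{aligned}$$

Applying this to \({\textsf{N}}^i_t\), \(\partial _x {\textsf{N}}^i_t\), and \(\partial ^2_x {\textsf{N}}^i_t\) controls the \(W^{2,\infty }(\textrm{d}x)\)-norm by the \(W^{3,1}(\textrm{d}x)\)-norm.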

Recalling [43, Proposition 5.8], we know that along a motion by curvature the \(L^2(\textrm{d}s)\)-norms of derivatives of the curvature are bounded by the \(L^2(\textrm{d}s)\)-norms of the curvature and by the inverse of the length of the edges. Hence the assumption \({\overline{{\textsf{N}}}}_t \in B_r(0)\subset V\) guarantees that \(\int |\partial _s \widehat{{\varvec{k}}}^i_t| \,\mathrm ds \le C(r,\Gamma _*)\). In particular \(\Vert {\textsf{N}}^i_t \Vert _{W^{3,1}(\textrm{d}x)} \le C(r,\Gamma _*)\), and thus \(\Vert {\textsf{N}}^i_t \Vert _{W^{2,\infty }(\textrm{d}x)}\le C(r,\Gamma _*)\). Therefore we can improve the estimate on \(\partial _x^3 {\textsf{N}}^i_t\) by first taking squares and then integrating in (5.9), which yields

$$\begin{aligned} \Vert \partial _x^3 {\textsf{N}}^i_t\Vert _{L^2(\textrm{d}x)} \le C(r,\Gamma _*), \end{aligned}$$

thus proving the claim (5.4). \(\square \)

An immediate consequence is the next result, which promotes subconvergence of the motion by curvature to full convergence.

Theorem 5.3

Let \(\Gamma _t: G\rightarrow {\mathbb {R}}^2\) be a smooth motion by curvature defined on \([0,+\infty )\). Let \(\Gamma _\infty :G\rightarrow {\mathbb {R}}^2\) be a minimal network such that \(\Gamma _{t_n} \rightarrow \Gamma _\infty \) in \(H^2\) for some sequence \(t_n\nearrow +\infty \) as \(n\rightarrow +\infty \). Then \(\Gamma _{t} \rightarrow \Gamma _\infty \) smoothly as \(t\rightarrow +\infty \), up to reparametrization.

Proof

The statement immediately follows from Theorem 5.2. \(\square \)

We conclude this part by collecting some observations implied by the previous stability results.

Remark 5.4

Theorem 5.3 can be combined with [43, Proposition 13.5] in the following way. Let \(\Gamma _t: G\rightarrow {\mathbb {R}}^2\) be a motion by curvature of a tree-like network, i.e., G has no cycles, defined on \([0,+\infty )\). If the sequential limit \(\Gamma _\infty \) along a sequence of times \(t_n\), which always exists by [43, Proposition 13.5], is regular, then \(\Gamma _\infty \) is the full limit of \(\Gamma _t\) as \(t\rightarrow +\infty \).

However, the example in the next section shows that in general the limit \(\Gamma _\infty \) may be degenerate.

Remark 5.5

If the network \(\Gamma _*\) in Theorem 5.2 is an isolated critical point of the length, then \(\Gamma _\infty \) coincides with \(\Gamma _*\). This is always the case if \(\Gamma _*\) is a tree, i.e., G has no cycles, since there exist only finitely many minimal trees \({\widehat{\Gamma }}:G\rightarrow {\mathbb {R}}^2\) having the same endpoints as \(\Gamma _*\).

Remark 5.6

In the notation of Theorem 5.2, in some cases we are able to conclude that \(\Gamma _\infty \) coincides with \(\Gamma _*\), even if \(\Gamma _*\) is not an isolated critical point of the length.

Suppose that \(\Gamma _*\) is a minimal network composed of a regular hexagon H enclosing area \(A_*\) and of six straight segments joining the vertices of H to the vertices of a bigger regular hexagon. Then \(\Gamma _*\) is not an isolated critical point of the length: indeed, there exists a one-parameter family of critical points with the same length, namely all networks composed of concentric hexagons and straight segments connecting them to the endpoints, see Fig. 3. It can be proved that there are no other minimal networks with this topology and with the same endpoints.

Fig. 3: Three different minimal networks with the same endpoints and topology. All these networks have the same length

In the above notation, suppose now that \(\Gamma _0\) is a regular network with the same endpoints and the same topology as \(\Gamma _*\), sufficiently close to \(\Gamma _*\) in \(H^2\), and such that the area enclosed by the loop equals \(A_*\). Then \(\Gamma _\infty \) coincides with \(\Gamma _*\). Indeed the area enclosed by a loop composed of six curves is preserved during the evolution (see [43, Section 8.2]) and \(\Gamma _*\) is the unique minimal network enclosing area \(A_*\) among the one-parameter family of possible minimal networks.
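
Let us sketch, in the spirit of [43, Section 8.2], why the enclosed area is constant (signs depend on the chosen orientation of the loop). If A(t) denotes the area enclosed by the hexagonal loop, whose curves move with normal velocity equal to the oriented curvature, then

$$\begin{aligned} A'(t) = -\sum _{i=1}^{6} \int _{\gamma ^i_t} {\widetilde{k}} \,\mathrm ds, \qquad \sum _{i=1}^{6} \int _{\gamma ^i_t} {\widetilde{k}} \,\mathrm ds = 2\pi - 6\cdot \frac{\pi }{3} = 0, \end{aligned}$$

where the second identity expresses the total turning of the loop: the turning \(2\pi \) splits into the integrals of the oriented curvature along the six curves plus an exterior angle \(\pi - \tfrac{2}{3}\pi = \tfrac{\pi }{3}\) at each of the six junctions.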

6 Convergence to a degenerate network in infinite time

In this section we construct an example of a motion by curvature existing for all times, with uniformly bounded curvature, smoothly converging to a degenerate network. More precisely, the following result holds.

Theorem 6.1

There exists a smooth regular network \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) such that the motion by curvature \(\Gamma _t\) starting from \(\Gamma _0\) exists for every time, the length of each curve \(\gamma ^i_t\) is strictly positive for any time, the curvature of each curve \(\gamma ^i_t\) is uniformly bounded from above, and \(\Gamma _t\) smoothly converges to a degenerate network \(\Gamma _\infty \) as \(t\rightarrow +\infty \), up to reparametrization. Specifically, the length of a distinguished curve \(\gamma ^0_t\) tends to zero as \(t\rightarrow +\infty \).

Proof

The proof of the statement follows by putting together the observations in Step 1, Step 2, and Step 3 below. \(\square \)

From now on and for the rest of this section, let \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network as in Fig. 1. We assume that \(\Gamma _0\) is composed of five curves, that it is symmetric with respect to the horizontal and vertical axes, that the middle curve \(\gamma ^0\) is a segment, and that the remaining four curves are convex, i.e., their oriented curvature has a sign. Moreover, the network has four endpoints located at the vertices of a rectangle with sides of length \(2/\sqrt{3}\) and 2, so that the diagonals of the rectangle meet forming angles of \(\tfrac{2}{3} \pi \) and \(\tfrac{\pi }{3}\), see Fig. 1.
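
As a quick check of the angle condition: the half-sides of the rectangle have lengths \(1/\sqrt{3}\) and 1, so each diagonal forms an angle \(\arctan (1/\sqrt{3}) = \tfrac{\pi }{6}\) with the vertical axis, and the two diagonals meet at angles

$$\begin{aligned} 2\cdot \frac{\pi }{6} = \frac{\pi }{3} \qquad \text {and} \qquad \pi - \frac{\pi }{3} = \frac{2}{3}\pi . \end{aligned}$$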

We want to show that the motion by curvature \(\Gamma _t\) starting from such a datum \(\Gamma _0\) satisfies the statement of Theorem 6.1. The candidate limit is given by the degenerate network defined by the diagonals of the rectangle, that is, the dotted lines in Fig. 1.

By symmetry, it is sufficient to study the evolution of the middle curve and of the two bottom curves in Fig. 1. To fix the notation, we reproduce this part of the network in Fig. 4. Observe that the straight middle curve \(\gamma ^0\) is parametrized from bottom to top, while the convex curves \(\gamma ^1, \gamma ^2\) have their endpoint 1 at the junction. This is in contrast with our usual convention of locating endpoint 1 at the endpoints of the network; we adopt this parametrization here to avoid an unnecessary proliferation of minus signs in the computations below. Finally, we denote by

$$\begin{aligned} \omega :=(0,1), \end{aligned}$$

the vertical unit vector, coinciding with the tangent vector of the curve \(\gamma ^0\).

Fig. 4: Bottom half of the motion by curvature starting from a network as in Fig. 1, specifying notation and orientation of the edges

Recalling Remark 2.12, we can assume that the motion by curvature is smooth and evolves by the special flow, i.e., \(\partial _t\gamma ^i_t= |\partial _x\gamma ^i_t|^{-2} \partial ^2_x \gamma ^i_t\) for any i. Decomposing \(\partial _t \gamma ^i_t\) into tangential and normal components, we write

$$\begin{aligned} \partial _t\gamma ^i_t = {\widetilde{k}}_i \nu ^i_t + \lambda _i \tau ^i_t, \end{aligned}$$

where we denote by \({\widetilde{k}}_i\) the oriented curvature of \(\gamma ^i_t\), i.e., \({\widetilde{k}}_i:=\langle {\varvec{k}}^i_t, \nu ^i_t\rangle \). We drop the subscript t in \({\widetilde{k}}_i\) and \(\lambda _i\) for ease of notation.

At least for short times, by choice of the initial datum, we can consider the functions \(v_i\) defined by

$$\begin{aligned} v_i:=\frac{1}{\langle \nu ^i_t,\omega \rangle }, \end{aligned}$$

for \(i=1,2\). We further assume that

$$\begin{aligned} \langle \tau ^1_t(0),\omega \rangle \big |_{t=0} >0. \end{aligned}$$
(6.1)

We preliminarily observe that, by symmetry and choice of orientations, we have \({\widetilde{k}}_1=-{\widetilde{k}}_2\) and \(\partial _s {\widetilde{k}}_1 = - \partial _s {\widetilde{k}}_2\) at any time and point. Moreover, symmetry and evolution of curvature imply that \(\gamma ^0_t\) is a vertical segment for any time; then \(\partial _t\gamma ^0_t(t,0)\) and \(\omega \) are parallel, hence \(\lambda _1(t,1)= \langle \partial _t\gamma ^0_t(t,0), \tau ^1_t\rangle = \langle \partial _t\gamma ^0_t(t,0), \tau ^2_t\rangle = \lambda _2(t,1)\) for any \(t\in [0,T)\). On the other hand, the boundary condition obtained by the derivative \(\partial _t \langle \tau ^1_t(1),\tau ^2_t(1)\rangle =0\), see [43], reads

$$\begin{aligned} \partial _s {\widetilde{k}}_1(t,1) + \lambda _1(t,1) {\widetilde{k}}_1(t,1) = \partial _s {\widetilde{k}}_2(t,1) + \lambda _2(t,1) {\widetilde{k}}_2(t,1) . \end{aligned}$$

Therefore we get that

$$\begin{aligned} \partial _s {\widetilde{k}}_i(t,1) + \lambda _i(t,1) {\widetilde{k}}_i(t,1) = 0, \end{aligned}$$
(6.2)

for \(i=1,2\) and any \(t \in [0,T)\). Finally, recalling from [43, Section 3] that tangential velocities at a junction can be expressed in terms of normal velocities, which easily follows from the identity \(\partial _t \gamma ^1_t(1)=\partial _t \gamma ^2_t(1)\), we have that

$$\begin{aligned} \lambda _1(t,1) = - \frac{{\widetilde{k}}_2(t,1)}{\sqrt{3}} = \frac{{\widetilde{k}}_1(t,1)}{\sqrt{3}} , \qquad \qquad \lambda _2(t,1) = \frac{{\widetilde{k}}_1(t,1)}{\sqrt{3}} = -\frac{{\widetilde{k}}_2(t,1)}{\sqrt{3}} . \end{aligned}$$
(6.3)
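
As a consistency check of (6.3), one can also argue via the symmetries of our configuration: the junction stays on the vertical axis, so its velocity is vertical, say \(\partial _t\gamma ^0_t(t,0) = V \omega \) for some scalar V, while the \(\tfrac{2}{3}\pi \) angle condition and our orientation conventions give \(\langle \nu ^1_t,\omega \rangle |_{x=1} = \tfrac{\sqrt{3}}{2}\) and \(\langle \tau ^1_t,\omega \rangle |_{x=1} = \tfrac{1}{2}\). Projecting \(\partial _t\gamma ^1_t(1) = {\widetilde{k}}_1\nu ^1_t + \lambda _1 \tau ^1_t = V\omega \) onto normal and tangent then yields

$$\begin{aligned} {\widetilde{k}}_1(t,1) = V \langle \omega ,\nu ^1_t\rangle = \frac{\sqrt{3}}{2}\, V, \qquad \lambda _1(t,1) = V \langle \omega ,\tau ^1_t\rangle = \frac{1}{2}\, V, \end{aligned}$$

whence \(\lambda _1(t,1) = {\widetilde{k}}_1(t,1)/\sqrt{3}\), in accordance with (6.3).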
  1. Step 1

    Letting \(T>0\) be the maximal time of existence of the flow, we want to prove that the functions \(v_i\) are defined on [0, T) and

    $$\begin{aligned} {\widetilde{k}}_1 \ge 0 ,\qquad \qquad 1\le v_1 \le \frac{2}{\sqrt{3}}, \end{aligned}$$
    (6.4)

    for any \(x \in [0,1]\) and \(t \in [0,T)\). In particular, the curves \(\gamma ^1_t, \gamma ^2_t\) can be parametrized by convex graphs on a fixed interval for any time.

    By basic computations on the evolution of geometric quantities, see [17, 43], one easily obtains

    $$\begin{aligned} (\partial _t - \partial ^2_s) v_1 = -v_1({\widetilde{k}}_1)^2 -2 \frac{(\partial _s v_1)^2}{v_1} + \lambda _1 \partial _s v_1. \end{aligned}$$
    (6.5)
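
    For the reader's convenience, here is a sketch of how (6.5) can be obtained. Set \(u_1:=\langle \nu ^1_t,\omega \rangle \) and recall (see [17, 43]) that along the flow \(\partial _s \nu ^1_t = -{\widetilde{k}}_1 \tau ^1_t\) and \(\partial _t \nu ^1_t = -(\partial _s {\widetilde{k}}_1 + \lambda _1 {\widetilde{k}}_1)\tau ^1_t\). Then

    $$\begin{aligned} (\partial _t - \partial ^2_s) u_1 = ({\widetilde{k}}_1)^2 u_1 + \lambda _1 \partial _s u_1, \qquad (\partial _t - \partial ^2_s) \frac{1}{u_1} = -\frac{(\partial _t - \partial ^2_s) u_1}{(u_1)^2} - \frac{2(\partial _s u_1)^2}{(u_1)^3}, \end{aligned}$$

    and substituting \(v_1 = 1/u_1\) in the second identity gives exactly (6.5).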

    Recalling that

    $$\begin{aligned} (\partial _t - \partial ^2_s) {\widetilde{k}}_1 = \lambda _1 \partial _s {\widetilde{k}}_1 + ({\widetilde{k}}_1)^3, \end{aligned}$$
    (6.6)

    we obtain

    $$\begin{aligned} \begin{aligned} (\partial _t - \partial ^2_s)(v_1 {\widetilde{k}}_1) = \left[ \lambda _1 -2v_1 {\widetilde{k}}_1 \langle \tau ^1_t, \omega \rangle \right] \partial _s (v_1 {\widetilde{k}}_1) . \end{aligned} \end{aligned}$$

    Exploiting (6.2) and (6.3), we see that \((v_1 {\widetilde{k}}_1)\) satisfies a Neumann boundary condition at \(x=1\), that is

    $$\begin{aligned} \begin{aligned} \partial _s(v_1 {\widetilde{k}}_1) \big |_{x=1}&= v_1 \partial _s {\widetilde{k}}_1 + ({\widetilde{k}}_1)^2 (v_1)^2 \langle \tau ^1_t,\omega \rangle \, \big |_{x=1}\\&= -\frac{2}{\sqrt{3}}\lambda _1 {\widetilde{k}}_1 + ({\widetilde{k}}_1)^2 \left( \frac{2}{\sqrt{3}}\right) ^2 \frac{1}{2} \, \bigg |_{x=1} \\&= -\frac{2}{3} ({\widetilde{k}}_1)^2 + \frac{2}{3} ({\widetilde{k}}_1)^2 \, \bigg |_{x=1} = 0. \end{aligned} \end{aligned}$$
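
    In the first equality above we used the identity \(\partial _s v_1 = (v_1)^2\, {\widetilde{k}}_1 \langle \tau ^1_t,\omega \rangle \), which follows from \(\partial _s \langle \nu ^1_t,\omega \rangle = -{\widetilde{k}}_1 \langle \tau ^1_t,\omega \rangle \); in the second one we inserted the boundary values \(v_1(t,1) = 2/\sqrt{3}\) and \(\langle \tau ^1_t,\omega \rangle |_{x=1} = 1/2\), together with (6.2), while the final cancellation uses (6.3).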

    Let \({\overline{T}}\le T\) be the maximal time such that \(v_1\) is well defined. For \(\varepsilon ,\delta >0\), we consider the function \(f:=v_1 {\widetilde{k}}_1 + \varepsilon t + \delta \). By the above observations and since \({\widetilde{k}}_1(t,0)=0\), the function f satisfies

    $$\begin{aligned} {\left\{ \begin{array}{ll} (\partial _t - \partial ^2_s) f = \left[ \lambda _1 -2v_1 {\widetilde{k}}_1 \langle \tau ^1_t, \omega \rangle \right] \partial _s f + \varepsilon &{} \text {on } [0,{\overline{T}}) \times [0,1], \\ f(0,x) \ge \delta &{} \forall \, x \in [0,1],\\ f(t,0) \ge \delta &{} \forall \, t \in [0,{\overline{T}}), \\ \partial _s f (t,1) = 0 &{} \forall \, t \in [0,{\overline{T}}). \end{array}\right. } \end{aligned}$$

    By a standard argument involving the maximum principle, we can prove that \(f>0\) at any \((t,x) \in [0,{\overline{T}})\times [0,1]\). More precisely, if \({\overline{t}}>0\) is the first time such that there is \({\overline{x}}\) with \(f({\overline{t}}, {\overline{x}}) =0\), then \({\overline{x}} \in (0,1]\). The case \({\overline{x}}=1\) is excluded since the Hopf lemma (see [53, Theorem 6, p. 174]) would imply \(\partial _s f({\overline{t}},1) <0\). Also the case \({\overline{x}}\in (0,1)\) leads to a contradiction, as in this case \(0 \ge (\partial _t - \partial ^2_s) f ({\overline{t}},{\overline{x}}) \ge \varepsilon >0 \).

    The arbitrariness of \(\varepsilon ,\delta \) implies that \( v_1 {\widetilde{k}}_1\ge 0\) on \([0,{\overline{T}})\times [0,1]\). Since by continuity \(v_1\) must be strictly positive on \([0,{\overline{T}})\times [0,1]\), we deduce \({\widetilde{k}}_1\ge 0\) on \([0,{\overline{T}})\times [0,1]\). Since convexity is preserved up to time \({\overline{T}}\), recalling assumption (6.1) we obtain

    $$\begin{aligned} \begin{aligned} \partial _t \langle \nu ^1_t,\omega \rangle |_{x=0}&= -\partial _s {\widetilde{k}}_1 \langle \tau ^1_t,\omega \rangle |_{x=0}\le 0, \\ \partial _s\langle \nu ^1_t,\omega \rangle&= \langle - {\widetilde{k}}_1 \tau ^1_t, \omega \rangle \le 0, \end{aligned} \end{aligned}$$

    where we used that \(\partial _s{\widetilde{k}}_1|_{x=0}\ge 0\), since \({\widetilde{k}}_1(t,0)=0\) is a global minimum of \({\widetilde{k}}_1\). Therefore the minimum of \(\langle \nu ^1_t,\omega \rangle \) is achieved at \(x=1\), that is, \(\tfrac{\sqrt{3}}{2}= \langle \nu ^1_t,\omega \rangle |_{x=1} \le \langle \nu ^1_t,\omega \rangle \le 1\). The positive lower bound on \( \langle \nu ^1_t,\omega \rangle \) implies that \({\overline{T}}=T\) and completes the proof of the first step.

  2. Step 2

    We claim that there exists a constant \(C>0\) such that \({\widetilde{k}}_1 \le C\) for any \(t\in [0,T)\). Moreover, for any \(k\ge 1\) there is \(C_k>0\) such that \(\partial _s^k{\widetilde{k}}_1 \le C_k\) for any \(t\in [0,T)\).

    By the evolution equations for \(v_1\) and \({\widetilde{k}}_1\), we can compute

    $$\begin{aligned} \begin{aligned} (\partial _t - \partial ^2_s) \left( (v_1)^2({\widetilde{k}}_1)^2\right)&= 2 \Big ( \tfrac{1}{2}\lambda _1 \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) \\&\quad -(v_1)^2 (\partial _s {\widetilde{k}}_1)^2 - 3(\partial _s v_1)^2 ({\widetilde{k}}_1)^2 - \partial _s( {\widetilde{k}}_1^2) \,\partial _s (v_1^2) \Big ). \end{aligned} \end{aligned}$$
    (6.7)
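
    A quick way to check (6.7) is to start from the transport equation for \(v_1 {\widetilde{k}}_1\) obtained in Step 1, whose drift coefficient equals \(\lambda _1 - 2v_1^{-1}\partial _s v_1\) by the identity \(\partial _s v_1 = (v_1)^2 {\widetilde{k}}_1 \langle \tau ^1_t,\omega \rangle \): since \((v_1)^2({\widetilde{k}}_1)^2 = (v_1 {\widetilde{k}}_1)^2\), we have

    $$\begin{aligned} (\partial _t - \partial ^2_s) (v_1{\widetilde{k}}_1)^2 = 2\, v_1{\widetilde{k}}_1 \left[ \lambda _1 - 2 v_1^{-1}\partial _s v_1 \right] \partial _s (v_1{\widetilde{k}}_1) - 2 \left( \partial _s (v_1{\widetilde{k}}_1) \right) ^2, \end{aligned}$$

    and expanding \(\partial _s(v_1{\widetilde{k}}_1) = v_1 \partial _s {\widetilde{k}}_1 + {\widetilde{k}}_1 \partial _s v_1\) rearranges the right-hand side into that of (6.7).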

    By Young's inequality we estimate

    $$\begin{aligned} \begin{aligned}&-2\partial _s( {\widetilde{k}}_1^2) \,\partial _s (v_1^2)\\&\quad = - \partial _s( {\widetilde{k}}_1^2) \,\partial _s (v_1^2) - 4 v_1 {\widetilde{k}}_1 (\partial _s v_1) (\partial _s {\widetilde{k}}_1) \\&\quad = - \partial _s \Big (v_1^2 {\widetilde{k}}_1^2 \Big ) \partial _s(v_1^2) \, v_1^{-2} + ( {\widetilde{k}}_1)^2 v_1^{-2} \big (\partial _s(v_1^2) \big )^2 - 4 v_1 {\widetilde{k}}_1 (\partial _s v_1) (\partial _s {\widetilde{k}}_1) \\&\quad = -2 v_1^{-1} \,\partial _s v_1 \, \partial _s \Big (v_1^2 {\widetilde{k}}_1^2 \Big ) + 4 ( {\widetilde{k}}_1)^2 \big (\partial _s v_1 \big )^2 - 4 v_1 {\widetilde{k}}_1 (\partial _s v_1) (\partial _s {\widetilde{k}}_1) \\&\quad \le -2 v_1^{-1} \,\partial _s v_1 \, \partial _s \Big (v_1^2 {\widetilde{k}}_1^2 \Big ) + 4 ( {\widetilde{k}}_1)^2 \big (\partial _s v_1 \big )^2 + 2(v_1)^2 (\partial _s {\widetilde{k}}_1)^2 + 2({\widetilde{k}}_1)^2 (\partial _s v_1)^2 \\&\quad = 2 \Big ( - v_1^{-1} \,\partial _s v_1 \, \partial _s \Big (v_1^2 {\widetilde{k}}_1^2 \Big ) + 3 ( {\widetilde{k}}_1)^2 \big (\partial _s v_1 \big )^2 + (v_1)^2 (\partial _s {\widetilde{k}}_1)^2 \Big ). \end{aligned} \end{aligned}$$

    Inserting in (6.7) we get

    $$\begin{aligned} \begin{aligned} (\partial _t - \partial ^2_s) \left( (v_1)^2({\widetilde{k}}_1)^2\right)&\le 2 \Big ( \tfrac{1}{2}\lambda _1 \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) -v_1^{-1} \,\partial _s v_1 \, \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) \Big ) \\&= \left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) . \end{aligned} \end{aligned}$$
    (6.8)

    Observe that \(v_1=v_2\) by symmetry, hence all the above considerations hold for \(v_2\) as well. We further consider

    $$\begin{aligned} g_i:=({\widetilde{k}}_i)^2 (v_i)^2, \end{aligned}$$

    for \(i=1,2\). Again, actually \(g_1=g_2\) by symmetry. Observe that

    $$\begin{aligned} g_1(t,0)=\partial _s g_1(t,0) = 0, \end{aligned}$$
    (6.9)

    as \({\widetilde{k}}_1(t,0)=0\), for any \(t \in [0,T)\). Moreover

    $$\begin{aligned} \begin{aligned} \partial _s g_1(t,1)&= 2 \Big ( {\widetilde{k}}_1 (\partial _s {\widetilde{k}}_1) (v_1)^2 + ({\widetilde{k}}_1)^2 v_1 (\partial _s v_1) \Big ) \, \Big |_{(t,1)} \\&= 2 \Big ( {\widetilde{k}}_1 (\partial _s {\widetilde{k}}_1) (v_1)^2 + ({\widetilde{k}}_1)^2 v_1 (v_1)^2 {\widetilde{k}}_1 \langle \tau ^1_t,\omega \rangle \Big ) \, \Big |_{(t,1)} \\&= 2 \Big ( {\widetilde{k}}_1 (\partial _s {\widetilde{k}}_1) (2/\sqrt{3})^2 + ({\widetilde{k}}_1)^3 (2/\sqrt{3})^3 (1/2) \Big ) \, \Big |_{(t,1)} \\&= \frac{8}{3}{\widetilde{k}}_1 \Big ( (\partial _s {\widetilde{k}}_1) + ({\widetilde{k}}_1)^2 /\sqrt{3} \Big ) \, \Big |_{(t,1)} =0, \end{aligned} \end{aligned}$$
    (6.10)

    where the last equality follows from (6.2) and (6.3). Obviously, \(\partial _s g_2(t,1) =0\) as well.

    Now take \(t_0 \in (0,T)\) and let \(p_0\in {\mathbb {R}}^2\) be the midpoint of the image of the straight edge \(\gamma ^0\). Without loss of generality we can assume that \(p_0=0\) is the origin of \({\mathbb {R}}^2\). Hence let

    $$\begin{aligned} \rho (t,p) :=\frac{1}{\sqrt{4\pi (t_0-t)}} \exp \left( -\frac{|p|^2}{4(t_0-t)} \right) . \end{aligned}$$

    Denoting \(\rho \circ \gamma ^i_t:=\rho (t,\gamma ^i_t)\), we observe that

    $$\begin{aligned} \begin{aligned} -\partial _s(\rho \circ \gamma ^1_t) \,\big |_{(t,1)}&= -\left\langle \nabla \rho |_{(t,\gamma ^1_t(1))} , \tau ^1_t(1)\right\rangle = \frac{\rho \circ \gamma ^1_t}{2(t_0-t)} \langle \gamma ^1_t(1), \tau ^1_t(1)\rangle \le 0, \end{aligned} \end{aligned}$$
    (6.11)

    for any \(t \in (0,t_0)\), where the inequality follows by the choice of the origin of \({\mathbb {R}}^2\).

    Now let \(A:=\max _{[0,1]} ({\widetilde{k}}_1)^2 (v_1)^2\, \big |_{t=0}>0\) and define

    $$\begin{aligned} f_i(t,x) :=\left( \max \left\{ ({\widetilde{k}}_i)^2 (v_i)^2 -A, 0 \right\} \right) ^2. \end{aligned}$$

    Since \(F(y):=\left( \max \left\{ y -A, 0 \right\} \right) ^2\) is of class \(C^{1,1}\), the function \(f_i(t,\cdot )\) belongs to \(H^2\) for any t and the chain rule holds almost everywhere, i.e., \(\partial _s f_i =2 \max \left\{ ({\widetilde{k}}_i)^2 (v_i)^2 -A, 0 \right\} \partial _s(({\widetilde{k}}_i)^2 (v_i)^2)\) and \(\partial _s^2 f_i =2\left[ \partial _s(({\widetilde{k}}_i)^2 (v_i)^2) \right] ^2 + 2 \max \left\{ ({\widetilde{k}}_i)^2 (v_i)^2 -A, 0 \right\} \partial _s^2(({\widetilde{k}}_i)^2 (v_i)^2)\) almost everywhere. Analogously, \(f_i\) is differentiable with respect to t at any (t, x) and \(\partial _t f_i = 2 \max \left\{ ({\widetilde{k}}_i)^2 (v_i)^2 -A, 0 \right\} \partial _t(({\widetilde{k}}_i)^2 (v_i)^2)\) is continuous on \([0,T)\times [0,1]\).

    Recalling (6.8) and using Young's inequality we estimate

    $$\begin{aligned} \begin{aligned} (\partial _t - \partial _s^2) f_1&=2 \max \left\{ ({\widetilde{k}}_1)^2 (v_1)^2 -A, 0 \right\} (\partial _t - \partial _s^2) \big ( ({\widetilde{k}}_1)^2 (v_1)^2\big )\\&\quad -2\left[ \partial _s(({\widetilde{k}}_1)^2 (v_1)^2) \right] ^2 \\&\le 2 \max \left\{ ({\widetilde{k}}_1)^2 (v_1)^2 -A, 0 \right\} \left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) \\&\quad -2\left[ \partial _s(({\widetilde{k}}_1)^2 (v_1)^2) \right] ^2\\&\le \frac{1}{2}\left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] ^2 f_1, \end{aligned} \end{aligned}$$
    (6.12)

    for any t and almost every x. We apply the monotonicity-type formula from Lemma A.2 with \(f=f_1\) to get

    $$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t} \int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds&\le \int _0^1 ( \rho \circ \gamma ^1_t)(\partial _t-\partial _s^2)f_1 \,\mathrm ds \\&\quad + \int _0^1 \left( \partial _s \lambda _1 -\frac{\lambda _1}{2(t_0-t)}\langle \gamma ^1_t,\tau ^1_t\rangle \right) ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \\&\quad +\big ( ( \rho \circ \gamma ^1_t)\partial _sf_1 - f_1 \partial _s ( \rho \circ \gamma ^1_t)\big )\Big |_0^1, \end{aligned}$$

    for any \(t\in (0,t_0)\). Employing (6.9), (6.10), (6.11), and (6.12), we obtain

    $$\begin{aligned} \begin{aligned}&\frac{\textrm{d}}{\textrm{d}t} \int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \\&\quad \le \int _0^1 \left( \frac{1}{2}\left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] ^2+ \partial _s \lambda _1 -\frac{\lambda _1}{2(t_0-t)}\langle \gamma ^1_t,\tau ^1_t\rangle \right) ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \\&\quad \quad - f_1(t,1) \partial _s(\rho \circ \gamma ^1_t)\big |_{(t,1)} \\&\quad \overset{(6.11)}{\le } \int _0^1 \left( \frac{1}{2}\left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] ^2+ \partial _s \lambda _1 -\frac{\lambda _1}{2(t_0-t)}\langle \gamma ^1_t,\tau ^1_t\rangle \right) ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \\&\quad \le C(t_0) \int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds, \end{aligned} \end{aligned}$$
    (6.13)

    where \(C(t_0)>0\) is some constant depending on the flow and on the choice of \(t_0\). Since \(\int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \big |_{t=0} =0\) by definition of A, the differential inequality in (6.13) implies, by Gronwall's lemma, that \(\int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds =0\) for any \(t \in [0,t_0)\). This means that \(f_1(t,x)=0\) for any x and \(t \in [0,t_0)\). By arbitrariness of \(t_0\), we get that

    $$\begin{aligned} ({\widetilde{k}}_i)^2 (v_i)^2(t,x) \le \max _{[0,1]} ({\widetilde{k}}_1)^2 (v_1)^2\, \big |_{t=0}, \end{aligned}$$

    for any x and \(t \in [0,T)\), \(i=1,2\). Taking into account (6.4), the claimed uniform upper bound on \({\widetilde{k}}_i\) follows. The second part of the claim in Step 2 follows by adapting the above reasoning to the derivatives \(\partial _s^k{\widetilde{k}}_i\) in place of \({\widetilde{k}}_i\) or, more easily, by observing that estimates on the derivatives \(\partial _s^k{\widetilde{k}}_i\) are independent of the length of \(\gamma ^0_t\). Indeed, by locality and uniqueness of the flow, the evolution of \(\gamma ^1_t, \gamma ^2_t\) does not change if \(\gamma ^1_t, \gamma ^2_t\) are considered to be edges of a completely analogous network as in Fig. 1, except that the length of \(\gamma ^0_0\) is taken arbitrarily large (see also the discussion in Remark 1.4). In such a case the upper bound previously proved on the curvature, together with lower bounds away from zero on the length of each edge, implies uniform bounds on the derivatives \(\partial _s^k{\widetilde{k}}_i\) (independently of \({\textrm{L}}(\gamma ^0_t)\)) by classical results like [43, Proposition 5.8].

  3. Step 3

    We want to show that the length of each curve is strictly positive for any time, \(T=+\infty \), the length of \(\gamma ^0_t\) converges to 0 as \(t\rightarrow +\infty \), and the curves \(\gamma ^1_t, \gamma ^2_t\) smoothly converge to (half of) the diagonals of the rectangle having vertices at the endpoints of the network, up to reparametrization.

    By Step 1, we can parametrize \(\gamma ^1_t\) as the graph of a function \(u:[0,T)\times [0,1]\rightarrow {\mathbb {R}}\), as in Fig. 5.

    Parametrizing the evolution of an edge \(E^i\) as a graph as in Fig. 5, where the parametrization evolves by the special flow \(\partial _t \gamma ^i_t = \partial ^2_x\gamma ^i_t/|\partial _x\gamma ^i_t|^2\), the function u solves the problem

    $$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u = \frac{\partial ^2_xu}{1+(\partial _x u)^2} &{} \text {for } (t,x) \in [0,T)\times [0,1], \\ u(t,0)=0 , \\ \partial _x u(t,1) = \tan (\pi /6) = 1/\sqrt{3}, \\ u(0,x)=u_0(x). \end{array}\right. } \end{aligned}$$
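
    Let us recall where this equation comes from. For the graph \(x\mapsto (x,u(t,x))\) one has \(\nu = (1+(\partial _x u)^2)^{-\frac{1}{2}}(-\partial _x u, 1)\) and oriented curvature \((1+(\partial _x u)^2)^{-\frac{3}{2}}\partial ^2_x u\); imposing that the normal velocity equals the curvature (tangential components only reparametrize the evolving curve) yields

    $$\begin{aligned} \frac{\partial _t u}{\sqrt{1+(\partial _x u)^2}} = \left\langle (0,\partial _t u), \nu \right\rangle = \frac{\partial ^2_x u}{\left( 1+(\partial _x u)^2\right) ^{\frac{3}{2}}}, \qquad \text {that is,} \qquad \partial _t u = \frac{\partial ^2_x u}{1+(\partial _x u)^2}. \end{aligned}$$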

    By the above steps, \(\partial ^2_xu \ge 0\) and \(0\le \partial _x u \le \partial _x u(t,1)= 1/ \sqrt{3}\), for any \(t \in [0,T)\).

    We compare the evolution of u with upper and lower barriers given by solutions of heat-type equations. More precisely, since \(\partial ^2_xu\ge 0\) by convexity and \(1 \le 1+(\partial _x u)^2 \le \tfrac{4}{3}\) by the gradient bound above, we have that

    $$\begin{aligned} \frac{3}{4}\partial ^2_xu \le \partial _t u \le \partial ^2_xu, \end{aligned}$$

    at any time and point. Hence we define v, w as the solutions to the problems

    $$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t v = \frac{3}{4}\partial ^2_xv &{} \text {on } [0,+\infty )\times [0,1], \\ v(t,0)=0 , \\ \partial _x v(t,1) = 1/\sqrt{3}, \\ v(0,x)=u_0(x). \end{array}\right. }\\ \qquad {\left\{ \begin{array}{ll} \partial _t w= \partial ^2_xw &{} \text {on } [0,+\infty )\times [0,1], \\ w(t,0)=0 , \\ \partial _x w(t,1) = 1/\sqrt{3}, \\ w(0,x)=u_0(x). \end{array}\right. } \end{aligned}$$

    It is well known that v, w exist for all times and converge to the function \(u_\infty (x) :=x/\sqrt{3}\) at an exponential rate as \(t\rightarrow +\infty \).
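
    This can be seen explicitly by separation of variables: writing, say, \(w = u_\infty + h\), the function h solves the heat equation with homogeneous mixed boundary conditions \(h(t,0)=0\) and \(\partial _x h(t,1)=0\), whence

    $$\begin{aligned} h(t,x) = \sum _{n=0}^{\infty } c_n\, e^{-\left( n+\frac{1}{2}\right) ^2\pi ^2 t} \sin \left( \left( n+\tfrac{1}{2}\right) \pi x\right) , \qquad c_n = 2\int _0^1 (u_0 - u_\infty ) \sin \left( \left( n+\tfrac{1}{2}\right) \pi x\right) \mathrm dx, \end{aligned}$$

    so that \(\Vert w(t,\cdot )-u_\infty \Vert _{L^\infty } \le C\, e^{-\frac{\pi ^2}{4}t}\) for our smooth initial datum; the same computation for v, with diffusivity \(\tfrac{3}{4}\), gives the rate \(e^{-\frac{3\pi ^2}{16}t}\).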

    We can consider the function

    $$\begin{aligned} z(t,x):={\left\{ \begin{array}{ll} u(t,x)-w(t,x) &{}\quad x \in [0,1], \\ u(t,2-x)-w(t,2-x) &{}\quad x \in (1,2], \end{array}\right. } \end{aligned}$$

    which is the even reflection of the function \(u-w\) about the point \(x=1\). Since \(\partial _x (u-w)(t,1)=0\), the function z is of class \(C^2\) and solves

    $$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t z \le \partial ^2_x u - \partial ^2_x w = \partial ^2_x z &{}\quad \text {on } [0,+\infty )\times [0,2], \\ z(t,0)=z(t,2)=0 , \\ z(0,x)=0 &{}\quad \forall \, x \in [0,2]. \end{array}\right. } \end{aligned}$$

    By the maximum principle, see [41, Theorem 2.1.1, Lemma 2.1.3], we get that \(z\le 0\) at any time and point, that is \(u(t,x)\le w(t,x)\).

    By an analogous comparison with v, we deduce that \(v(t,x)\le u(t,x)\le w(t,x)\). Therefore the length of \(\gamma ^0_t\) is strictly positive for any \(t \in [0,T)\), which, together with Step 2 and Theorem 2.13, implies \(T=+\infty \). Moreover, since both barriers converge to \(u_\infty \), the function u converges to \(u_\infty \) as well, and the above comparison analysis completes the proof of Step 3.

Fig. 5: Continuous line: graph parametrization of an edge of \(\Gamma _t\). Dotted line: straight limit curve of the flow