Abstract
We investigate stability properties of the motion by curvature of planar networks. We prove Łojasiewicz–Simon gradient inequalities for the length functional of planar networks with triple junctions. In particular, such an inequality holds for networks with junctions forming angles equal to \(\tfrac{2}{3}\pi \) that are close in \(H^2\)-norm to minimal networks, i.e., networks whose edges also have vanishing curvature. The latter inequality bounds a concave power of the difference between length of a minimal network \(\Gamma _*\) and length of a triple junctions network \(\Gamma \) from above by the \(L^2\)-norm of the curvature of the edges of \(\Gamma \). We apply this result to prove the stability of minimal networks in the sense that a motion by curvature starting from a network sufficiently close in \(H^2\)-norm to a minimal one exists for all times and smoothly converges. We further rigorously construct an example of a motion by curvature having uniformly bounded curvature that smoothly converges to a degenerate network in infinite time.
1 Introduction
A planar network is a pair \((\Gamma ,G)\), where G is an abstract connected graph with edges homeomorphic to the interval [0, 1] and \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a continuous map, see Definition 2.2. We shall mostly consider triple junctions networks, that is, networks whose edges either meet at junctions of order three or end at terminal points of the graph, and such that the restriction \(\gamma ^i\) of \(\Gamma \) to each edge is a \(C^1\)-embedding, see Definition 2.4. If we further require that the embedded edges meet forming angles equal to \(\tfrac{2}{3}\pi \), the network is said to be regular, and if in addition such embeddings are straight segments, it is said to be minimal, see Definition 2.4.
Minimal networks are easily seen to be critical points of the length functional \({{\textrm{L}}}\), which is defined by the sum of the lengths of the embedded edges via the parametrization \(\Gamma \) of a network. This class includes Steiner trees of finitely many points in the plane, that is, networks minimizing the length among those connecting such given points [50].
In this paper we investigate functional analytic and stability properties of the length functional and of the \(L^2\)-gradient flow of \({{\textrm{L}}}\).
The first of our main results consists in proving Łojasiewicz–Simon gradient inequalities for the length functional in \(H^2\)-neighborhoods of minimal networks.
Theorem 1.1
(cf. Corollary 3.14) Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then there exist \(C_{\textrm{LS}},\sigma >0\) and \(\theta \in (0,\tfrac{1}{2}]\) such that the following holds.
If \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a regular network of class \(H^2\) such that \(\Gamma \) and \(\Gamma _*\) have the same endpoints and
\[ \sum _i \Vert \gamma ^i - \gamma ^i_*\Vert _{H^2(\textrm{d}x)} \le \sigma , \]
where \(\gamma ^i_*, \gamma ^i\) are the restrictions of \(\Gamma _*, \Gamma \) to the i-th edge of G, respectively, then
\[ |{\textrm{L}}(\Gamma )-{\textrm{L}}(\Gamma _*)|^{1-\theta } \le C_{\textrm{LS}} \Big ( \sum _i \int _{\gamma ^i} |{\varvec{k}}^i|^2 \,\textrm{d}s \Big )^{\!\nicefrac {1}{2}}, \qquad \qquad (1.1) \]
where \({\varvec{k}}^i\) is the curvature of \(\gamma ^i\).
Estimates like (1.1) are named after Łojasiewicz and Simon due to their seminal works [38, 39, 58], where they first proved and employed analogous inequalities for analytic functionals over finite- or infinite-dimensional linear spaces. As we shall see, the validity of a Łojasiewicz–Simon inequality suffices to imply strong stability properties of critical points of the energy under consideration.
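To fix ideas, we recall the mechanism in the simplest, finite-dimensional setting; the computation below is a standard sketch (with generic constants) of how an inequality of the form (1.1) forces convergence of gradient flows.

```latex
% Classical Łojasiewicz inequality for an analytic f near a critical
% point x_*: for some theta in (0,1/2] and C>0, locally
%   |f(x)-f(x_*)|^{1-\theta} \le C\,|\nabla f(x)|.
% Along the gradient flow x'(t) = -\nabla f(x(t)), assuming f(x(t))>f(x_*),
\frac{\mathrm{d}}{\mathrm{d}t}\big(f(x(t))-f(x_*)\big)^{\theta}
  = -\,\theta\,\big(f(x(t))-f(x_*)\big)^{\theta-1}\,|\nabla f(x(t))|^{2}
  \le -\,\frac{\theta}{C}\,|x'(t)| .
% Integrating in time shows that the trajectory has finite length,
% hence it converges to a single limit point.
```

An infinite-dimensional analogue of this scheme, with the length functional in place of f and the curvature in place of the gradient, underlies the stability results discussed below.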
Theorem 1.1 is actually a particular case of a more general result yielding a Łojasiewicz–Simon inequality for the length functional among triple junctions networks, that is, one does not need to ask that edges form angles equal to \(\tfrac{2}{3}\pi \) at junctions to get a version of (1.1), see Theorem 3.13. As discussed in Remark 3.15, it is even possible to generalize the inequality to triple junctions networks letting endpoints free to vary.
Proving Theorem 1.1 as a consequence of a Łojasiewicz–Simon inequality holding among triple junctions networks not only gives an inequality for a much larger class of networks, but also simplifies its proof, since triple junctions networks do not need to satisfy an additional nonlinear requirement on the angles at junctions. Indeed, the proof of Theorem 3.13, which implies Theorem 1.1, eventually follows by employing a by-now established method for proving these kinds of inequalities for extrinsic geometric functionals [10, 13, 45, 46, 52], which relies on linear Functional Analysis.
Once one is able to parametrize the considered competitors as normal graphs over the critical point, the inequality eventually follows from a general functional analytic result, see Proposition 3.12, based on [9]. However, unlike the previous cases, the nonsmooth structure of networks necessarily introduces technical complications, as networks close to a fixed minimal one \(\Gamma _*\) cannot be written as normal graphs over the critical point. Hence, in order to perform a graph parametrization of networks close to \(\Gamma _*\), we need to allow for graphs having both a normal and a tangential component with respect to \(\Gamma _*\). This would generally violate the assumptions needed to produce a Łojasiewicz–Simon inequality, cf. Proposition 3.12, since variations of \(\Gamma _*\) in tangential directions are equivalent to reparametrizations of the curves of a network and thus generate an infinite dimensional kernel for a geometric functional like the length. We fix this issue by prescribing that the tangential components of these graph parametrizations depend linearly on the normal ones. Since a relation between such normal and tangential components is naturally satisfied at the junctions, the chosen dependence of tangential components on normal ones is given by a suitable prolongation of the relations at the junctions to the interior of the edges, see Proposition 3.4. We mention that an analogous construction has also been employed in [25].
It is possible to exploit the Łojasiewicz–Simon inequality in Theorem 1.1 for proving the stability of minimal networks with respect to the \(L^2\)-gradient flow of \({\textrm{L}}\), the so-called motion by curvature of networks. Along such flow, a regular network evolves keeping its endpoints fixed and moving with normal velocity equal to the curvature vector along each edge, see Sect. 2.3. The motion by curvature generalizes the one-dimensional mean curvature flow, called curve shortening flow, to the realm of singular one-dimensional objects given by planar networks.
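The gradient-flow structure can be made explicit by the first variation of the length: for a smooth motion by curvature with fixed endpoints, a standard computation (sketched here; the junction terms cancel because the three unit tangents at a regular junction sum to zero) gives

```latex
% Dissipation identity for the length along the motion by curvature:
% normal velocity equal to the curvature vector on each edge.
\frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{L}(\Gamma_t)
  = -\sum_{i}\int_{\gamma^i_t} |\boldsymbol{k}^i|^{2}\,\mathrm{d}s
  \;\le\; 0 ,
% so the length is nonincreasing along the flow and is stationary
% precisely when every edge has vanishing curvature.
```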
Bronsard and Reitich [4] first attempted to find strong solutions to the motion by curvature, providing local existence and uniqueness of solutions for admissible initial regular networks of class \(C^{2+\alpha }\) with the sum of the curvatures at the junctions equal to zero. The basic theory concerning short time existence and uniqueness of the motion by curvature was then carried out in [43], and further improved in [26] in order to prove existence of the flow starting from any regular network without extra assumptions on the initial datum. The parabolic regularization of the flow has also been addressed in [26]. It is known that the flow develops singularities, see Theorem 2.13, and a great deal of work has been done to understand the nature of these singularities and to define the flow past singularities [1, 7, 8, 30, 37, 40, 42, 43, 44].
However, exploiting the Łojasiewicz–Simon inequality we can prove that a flow starting sufficiently close to a minimal network in \(H^2\) exists for every time and smoothly converges to a (possibly different) minimal network. We mention that global existence of the flow starting close to critical points, together with convergence along a diverging sequence of times, was first studied in [34]. Hence the next theorem recovers and improves the main results of [34], see also Theorem 5.3 below.
Theorem 1.2
(cf. Theorem 5.2) Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then there exists \(\delta _{\Gamma _*}>0\) such that the following holds.
Let \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network having the same endpoints as \(\Gamma _*\) and such that \(\Vert \gamma ^i_0-\gamma ^i_*\Vert _{H^2(\textrm{d}x)}\le \delta _{\Gamma _*}\). Then the motion by curvature \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\) starting from \(\Gamma _0\) exists for all times and smoothly converges, up to reparametrization, to a minimal network \(\Gamma _\infty \) such that \({\textrm{L}}(\Gamma _\infty )={\textrm{L}}(\Gamma _*)\).
The fact that \(\Gamma _\infty \) may be different from \(\Gamma _*\) in the above theorem cannot be avoided, as there exist examples of one-parameter families of minimal networks having fixed endpoints. Consider for instance networks given by concentric regular hexagons with segments connecting the six vertices to fixed vertices of a bigger regular hexagon, as depicted in Fig. 3 below.
Observe that in Theorem 1.2 the initial datum \(\Gamma _0\) does not need to satisfy further geometric properties at junctions or endpoints other than being regular; this is possible as we shall employ the short time existence theory recently developed in [26], which removes the additional geometric assumptions required by previous existence theorems in [43, 44]. On the other hand, it is an open problem whether a stability result like Theorem 1.2 holds for a possibly non-regular initial datum \(\Gamma _0\), such as a \(\Gamma _0\) with only triple junctions but with angles possibly different from \(\tfrac{2}{3}\pi \) at junctions, sufficiently close in \(H^2\) to a reference minimal network. For such a network \(\Gamma _0\), it is possible to define a motion by curvature \(\Gamma _t\) starting from \(\Gamma _0\) [30, 37], and \(\Gamma _t\) is instantaneously regular for \(t>0\); however, the crucial short time properties we need (see Theorem 2.10 and Lemma 5.1) are delicate in this setting.
Finally, we observe that no minimizing properties of \(\Gamma _*\) are required in Theorem 1.2. Instead, by means of a simple comparison argument it is possible to show that minimal networks automatically minimize the length among suitably small \(C^0\) perturbations, see Lemma 4.1. Once such a minimality property is established, the proof of Theorem 1.2 follows by adapting a general argument outlined in [45, 46].
As a consequence of Theorem 1.2, it immediately follows that if a motion by curvature smoothly converges to a minimal network along a sequence of times, then it smoothly converges as time increases, see Theorem 5.3.
The final main contribution of this work is given by the rigorous construction of an example of motion by curvature presenting a topological singularity in infinite time.
It is known, see Theorem 2.13, that the motion by curvature may develop singularities, consisting in the blow-up of the \(L^2\)-norm of the curvature or in the disappearance of a curve whose length tends to zero. There exist well known examples where one or both of the previous alternatives occur in finite time, see for instance [43] and [42, Section 6]. Here we construct the first example of a motion by curvature existing for every time such that a curve of the evolving network vanishes in infinite time while the curvature remains uniformly bounded.
Theorem 1.3
(cf. Theorem 6.1) There exists a smooth regular network \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) such that the motion by curvature \(\Gamma _t\) starting from \(\Gamma _0\) exists for every time, the length of each curve \(\gamma ^i_t\) is strictly positive for any time, the curvature of each curve \(\gamma ^i_t\) is uniformly bounded from above, and \(\Gamma _t\) smoothly converges to a degenerate network \(\Gamma _\infty \) as \(t\rightarrow +\infty \), up to reparametrization. Specifically, the length of a distinguished curve \(\gamma ^0_t\) tends to zero as \(t\rightarrow +\infty \).
For proving the above theorem we will specifically consider the evolution of the network sketched in Fig. 1.
For \(\Gamma _0\) as in Fig. 1, the endpoints determine a rectangle whose diagonals intersect forming angles exactly equal to \(\tfrac{\pi }{3}\) and \(\tfrac{2}{3}\pi \). We will prove that the central curve in Fig. 1 shrinks along the flow in infinite time and the four remaining curves converge to the diagonals connecting the four endpoints.
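The stated angles can be verified by elementary trigonometry; here we assume, consistently with Remark 1.4, a rectangle of horizontal side \(2/\sqrt{3}\) and vertical side 2 centered at the origin, with the central curve on the vertical axis.

```latex
% A diagonal of the rectangle joins opposite corners, e.g. through the
% corner (1/\sqrt{3}, 1): its angle with the vertical axis satisfies
\tan\vartheta = \frac{1/\sqrt{3}}{1} = \frac{1}{\sqrt{3}}
  \quad\Longrightarrow\quad \vartheta = \frac{\pi}{6},
% hence the two diagonals intersect forming angles
2\vartheta = \frac{\pi}{3}
  \qquad\text{and}\qquad
\pi - 2\vartheta = \frac{2}{3}\pi .
```

This also matches the line of inclination \(\tfrac{\pi }{6}\) used as a barrier in Remark 1.4.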
We will study the motion by curvature starting from such a \(\Gamma _0\) explicitly. The symmetry chosen for producing this example allows us to generalize some ideas from [18] to the context of networks in order to uniformly bound the curvature along the motion, exploiting a monotonicity-type formula, see Lemma A.2. Eventually, a comparison with solutions of heat-type equations shows that convergence occurs in infinite time.
We stress that the example of Theorem 1.3 yields a simple and explicit flow converging to a degenerate critical point of the length, implying that the topology of the evolving networks changes in the limit. This change of topology is the fundamental reason why such an example has to be studied individually and why its convergence cannot follow from the Łojasiewicz–Simon inequalities proved in Theorem 1.1 or Theorem 3.13. This simple example motivates the search for improvements of the general method for proving convergence, as well as for possibly weaker variants of Łojasiewicz–Simon inequalities able to take into account these changes of topology in the limit. This project goes beyond the scope of the present paper and is left for future investigation.
We finally observe that the analysis carried out in the proof of Theorem 1.3 actually provides a family of examples of evolving networks with uniformly bounded curvature presenting every possible long time behavior: vanishing in infinite time of the length of a curve, vanishing in finite time of the length of a curve, and convergence in infinite time to a regular network. The first case is the one claimed in Theorem 1.3; the other cases are simple modifications of it and are described more precisely in the next remark.
Remark 1.4
Let \(L>0\). Consider a smooth regular initial datum \(\Gamma _{0,L}\) completely analogous to the one in Fig. 1, whose four curves connected to the endpoints are the same, but whose central vertical edge has length L at time zero. In particular, there is \({\overline{L}}\) such that \(\Gamma _{0,{\overline{L}}}\) is exactly the network in Fig. 1, that is, the resulting rectangle determined by the endpoints has sides of length \(2/\sqrt{3}\) and 2.
It can be proved (see Step 1 below) that the convexity of the four external curves is preserved along the flow, so the central curve remains vertical and its length is strictly decreasing. Moreover, the curvature of each curve is uniformly bounded (see the first part of Step 2 below) and, by comparison, each of the four external curves always remains on the same side of a suitable line; for instance, the evolved curve \(\gamma ^1_t\) stays below the line passing through the endpoint of \(\gamma ^1_t\) and forming an angle equal to \(\tfrac{\pi }{6}\) with the horizontal axis (see the argument in Step 3 below).
Let \(T_L>0\) be the maximal time of existence of the motion by curvature starting from \(\Gamma _{0,L}\). By uniqueness and locality of the flow, the evolution of the four external curves of \(\Gamma _{t,L}\) coincides with that of the four external curves of \(\Gamma _{t,L'}\) for any \(t \in [0,\min \{T_L,T_{L'}\})\); in particular, the evolution of such curves is independent of the length of the central curve.
It follows that for \(L>{\overline{L}}\) the length of the central curve is always bounded away from zero and then the above observations imply that \(T_L=+\infty \) and the flow smoothly converges in infinite time to a minimal network.
On the contrary, if \(L\in (0,{\overline{L}})\), the length of the central curve vanishes in finite time, leading to a topological singularity of the flow in finite time.
We conclude this introduction by mentioning some further contributions related to the topic.
The use of Łojasiewicz–Simon inequalities has become a prominent tool for understanding stability properties and convergence of geometric flows. Apart from the above mentioned references, let us also recall the recent results on the uniqueness of blow-ups for the mean curvature flow [11, 12, 57], and the application of the method to constrained high order extrinsic flows [25, 55, 56]. The use of these inequalities has seen successful applications also in the context of intrinsic geometric flows, namely in [6, 22].
Apart from stability results, which are the main focus of the current paper, there are several questions concerning the motion by curvature of networks: the study of singularities, global existence, and the extension to weaker classes of objects. As we said above, there is an extensive amount of literature concerning the analysis of the flow in the framework of classical PDEs (see [43] and references therein). There are also several generalized weak notions of the flow, for instance [3, 21, 32, 35, 59]. Recently, interesting progress has been made both in the direction of proving regularity of weak solutions [32, 33] and in establishing so-called weak-strong uniqueness theorems [23, 27].
It is worth mentioning that the motion by curvature of networks was first proposed for modelling reasons [49] and has recently again attracted the attention of the applied mathematics community [2, 19, 20, 31].
Organization. In Sect. 2 we collect basic definitions and results on networks and on the motion by curvature. In Sect. 3 we establish the graph parametrization of networks over minimal ones and we prove the Łojasiewicz–Simon inequality implying Theorem 1.1. In Sect. 4 we prove that minimal networks locally minimize the length in \(C^0\). Section 5 is devoted to the proof of the stability of minimal networks, implying Theorem 1.2. In Sect. 6 we prove Theorem 1.3 by analyzing the motion of networks like the one in Fig. 1. In Appendix A we collect some tools needed in the proofs, namely a well-known quantitative implicit function theorem and a monotonicity-type formula. In Appendix B we discuss extensions of our results to the case of networks on Riemannian surfaces.
2 Preliminaries
2.1 Networks
For a regular curve \(\gamma :[0,1]\rightarrow {\mathbb {R}}^2\) of class \(H^2\), define
\[ \tau _\gamma := \frac{\gamma '}{|\gamma '|}, \qquad \nu _\gamma := \textrm{R}\,\tau _\gamma , \]
the tangent and the normal vector, respectively, where \(\textrm{R}\) denotes the counterclockwise rotation of \(\tfrac{\pi }{2}\). We define the arclength element \(\,\mathrm ds_\gamma :=|\gamma '| \,\mathrm dx\) and the arclength derivative \(\partial _s:=|\gamma '|^{-1}\partial _x\). The curvature of \(\gamma \) is the vector
\[ {\varvec{k}}_\gamma := \partial _s \tau _\gamma = \partial _s^2 \gamma . \]
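As a basic example of these conventions, consider the circle of radius \(R>0\) parametrized by \(\gamma (x)=R(\cos 2\pi x,\sin 2\pi x)\), for which \(|\gamma '|=2\pi R\):

```latex
% Tangent, normal and curvature of the circle of radius R.
\tau_\gamma(x) = (-\sin 2\pi x,\ \cos 2\pi x), \qquad
\nu_\gamma(x) = \mathrm{R}\,\tau_\gamma(x) = (-\cos 2\pi x,\ -\sin 2\pi x),
\\[2pt]
\boldsymbol{k}_\gamma(x)
  = \partial_s \tau_\gamma(x)
  = \frac{1}{2\pi R}\,\partial_x \tau_\gamma(x)
  = \frac{1}{R}\,\nu_\gamma(x),
% so the curvature vector has modulus 1/R and points towards the center.
```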
We shall usually drop the subscript \(\gamma \) when there is no risk of confusion.
Fix \(N\in {\mathbb {N}}\) and let \(i\in \{1,\ldots , N\}\), \(E^i:=[0,1]\times \{i\}\), \(E:=\bigcup _{i=1}^N E^i\) and \(V:=\bigcup _{i=1}^N \{0,1\}\times \{i\}\).
Definition 2.1
Let \(\sim \) be an equivalence relation that identifies points of V. A graph G is the topological quotient space of E induced by \(\sim \), that is,
\[ G := E/\!\sim \,, \]
and we assume that G is connected.
Definition 2.2
A (planar) network is a pair \({\mathcal {N}}=(G,\Gamma )\), where G is a graph and
\[ \Gamma :G\rightarrow {\mathbb {R}}^2 \]
is a continuous map. We say that \({\mathcal {N}}\) is of class \(W^{k,p}\) (resp. \(C^{k,\alpha }\)) if each map \(\gamma ^i:=\Gamma _{\vert E^i}\) is either a constant map (singular curve) or a regular curve of class \(W^{k,p}\) (resp. \(C^{k,\alpha }\) up to the boundary). A network is smooth if it is of class \(C^\infty \). A network is degenerate if there is at least one singular curve.
Denoting by \(\pi :E\rightarrow G\) the projection onto the quotient, an endpoint is a point \(p \in G\) such that \(\pi ^{-1}(p) \subset V\) is a singleton, and a junction is a point \(m \in G\) such that \(\pi ^{-1}(m) \subset V\) is not a singleton. The order of a junction m is the cardinality \(\sharp \pi ^{-1}(m)\).
We denote by \(J_G\) and \(P_G\) the set of junctions and the set of endpoints of a graph G, respectively. A graph G is said to be regular if each junction has order 3.
Without loss of generality, if \({\mathcal {N}}=(G,\Gamma )\) is a network and \(p \in G\) is an endpoint with \(\pi ^{-1}(p)=\{(e,i)\}\), we will implicitly assume that \(e=1\).
Definition 2.3
Let \({\mathcal {N}}=(G,\Gamma )\) be a network of class \(C^1\) and let \(e \in \{0,1\}\). The inner tangent vector of a regular curve \(\gamma ^i\) of \({\mathcal {N}}\) at e is the vector \((-1)^e\,\tau ^i(e)\), that is, the unit tangent vector pointing towards the interior of the curve.
In this paper we will be interested in the classes of triple junctions networks and regular networks.
Definition 2.4
A network \({\mathcal {N}}=(G,\Gamma )\) is a triple junctions network if G is regular and each map \(\gamma ^i:=\Gamma _{\vert E^i}\) is a regular embedding of class \(C^1\), for every \(i\ne j\) the curves \(\gamma ^i\) and \(\gamma ^j\) do not intersect in their interior, and \(\pi (0,i)\ne \pi (1,i)\) for any i.
A network \({\mathcal {N}}=(G,\Gamma )\) is said to be regular if it is a triple junctions network such that whenever \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) is a junction then any two inner tangent vectors of \(\gamma ^i,\gamma ^j,\gamma ^k\) at \(e^i,e^j,e^k\), respectively, form an angle equal to \(\tfrac{2}{3} \pi \).
A network \({\mathcal {N}}=(G,\Gamma )\) is said to be minimal if it is regular and the curvature of the parametrization of each edge is identically zero. Moreover, we assume that the parametrizations of minimal networks have constant speed.
We shall usually denote a network by directly writing the map \(\Gamma :G\rightarrow {\mathbb {R}}^2\).
2.2 Function spaces
We introduce the space \(W^{1,2}_p\), which is a natural choice to define the motion by curvature of networks.
For \(T>0\), \(N \in {\mathbb {N}}\) with \(N\ge 1\), \(p \in (3,\infty )\), we define
\[ W^{1,2}_p\big ((0,T)\times (0,1);{\mathbb {R}}^{2N}\big ) := W^{1,p}\big ((0,T);L^p\big ((0,1);{\mathbb {R}}^{2N}\big )\big ) \cap L^p\big ((0,T);W^{2,p}\big ((0,1);{\mathbb {R}}^{2N}\big )\big ). \]
Remark 2.5
Elements of the space \(W^{1,2}_p\) are functions \(f\in L^p\left( (0,T); L^p(0,1)\right) \) possessing one distributional derivative with respect to time \(\partial _t f\in L^p\left( (0,T); L^p(0,1)\right) \). Furthermore, for almost every \(t\in (0,T)\), the function f(t) lies in \(W^{2,p}(0,1)\) and thus has two spatial derivatives \(\partial _x (f(t))\), \(\partial _x ^2\left( f(t)\right) \in L^p(0,1)\). One easily sees that the functions \(t\mapsto \partial _x^k(f(t))\) for \(k\in \{1,2\}\) lie in \(L^p\left( (0,T);L^p(0,1)\right) \).
The space \(W^{1,2}_p\) is defined as the intersection of two Bochner spaces, which are, in this case, Sobolev spaces of functions defined on a measure space with values in a Banach space. We recall that, for a Banach space X,
\[ L^p\big ((0,T);X\big ) := \Big \{ f:(0,T)\rightarrow X \text { measurable} \,:\, \int _0^T \Vert f(t)\Vert _X^p \,\textrm{d}t <\infty \Big \} \]
and
\[ W^{1,p}\big ((0,T);X\big ) := \Big \{ f\in L^p\big ((0,T);X\big ) \,:\, \partial _t f \in L^p\big ((0,T);X\big ) \Big \}. \]
Let \(\Gamma _t:[0,T)\times G\rightarrow {\mathbb {R}}^2\) be a time-dependent network parametrized by \((\gamma ^1_t,\ldots ,\gamma ^N_t)\) with \(\gamma ^i_t\in W^{1,2}_p\). We shall denote \((G,\Gamma _t)\) by the symbol \((\mathcal N_t)_t\) and
where
We also need to introduce a suitable space for initial data. Fixed \(p\in (3,\infty )\), the Sobolev–Slobodeckij space \(W^{2-\nicefrac {2}{p},p}\left( (0,1);{\mathbb {R}}^{2N}\right) \) is defined by
\[ W^{2-\nicefrac {2}{p},p}\left( (0,1);{\mathbb {R}}^{2N}\right) := \Big \{ f \in W^{1,p}\left( (0,1);{\mathbb {R}}^{2N}\right) \,:\, [\partial _x f]_{1-\nicefrac {2}{p},p} < \infty \Big \}, \]
with
\[ [g]_{\theta ,p} := \left( \int _0^1\!\!\int _0^1 \frac{|g(x)-g(y)|^p}{|x-y|^{1+\theta p}} \,\textrm{d}x\,\textrm{d}y \right) ^{\!\nicefrac {1}{p}}. \]
We define the \(W^{2-\nicefrac {2}{p}, p}\)-norm of a network \(\Gamma _0\) by
\[ \Vert \Gamma _0\Vert _{W^{2-\nicefrac {2}{p},p}} := \sum _{i=1}^N \Vert \gamma ^i_0\Vert _{W^{2-\nicefrac {2}{p},p}}, \]
where
\[ \Vert \gamma ^i_0\Vert _{W^{2-\nicefrac {2}{p},p}} := \Vert \gamma ^i_0\Vert _{W^{1,p}} + [\partial _x \gamma ^i_0]_{1-\nicefrac {2}{p},p}. \]
Remark 2.6
The temporal trace of the space \(W^{1,2}_p\) is the space \(W^{2-\nicefrac {2}{p},p}\). Since we would like to set the problem in \(W^{1,2}_p\), it is then natural to choose \(W^{2-\nicefrac {2}{p},p}\) as the space for the initial data. Moreover, we require \(p\in (3,\infty )\) in order to write the Herring condition at the junctions. Indeed, for any \(T>0\), \(p\in (3,\infty )\) and \(\alpha \in \left( 0,1-\nicefrac {3}{p}\right] \) we have the continuous embeddings
The first embedding follows from [15, Lemma 4.4], the second is an immediate consequence of the Sobolev Embedding Theorem. Again by the Sobolev Embedding Theorem for \(p\in (3,6)\) we have the compact embedding
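The role of the threshold \(p>3\) can be checked by elementary arithmetic; assuming the standard one-dimensional fractional Sobolev embedding \(W^{s,p}(0,1)\hookrightarrow C^{1,\alpha }([0,1])\) for \(s-\nicefrac {1}{p}\ge 1+\alpha \), with \(s=2-\nicefrac {2}{p}\) one finds

```latex
% Why p>3 guarantees a C^1 trace for the initial datum, so that the
% angle (Herring) condition at the junctions is meaningful:
2 - \frac{2}{p} - \frac{1}{p} > 1
  \;\Longleftrightarrow\; \frac{3}{p} < 1
  \;\Longleftrightarrow\; p > 3,
% with admissible Hölder exponents
\alpha \in \Big( 0,\ 1 - \frac{3}{p} \Big].
```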
2.3 Motion by curvature of planar networks
In this section we introduce the basic definitions and known results on the motion by curvature. Let \(p\in (3,\infty )\) be fixed.
Definition 2.7
(Admissible initial datum) A network \({\mathcal {N}}_0=(G,\Gamma _0)\) is an admissible initial datum for the motion by curvature if it is a regular network of class \(W^{2-\nicefrac {2}{p},p}\).
Definition 2.8
(Solutions to the motion by curvature) Let \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) be an admissible initial datum with \(P^i=\Gamma _0(p^i)\in {\mathbb {R}}^2\). A one-parameter family of regular networks \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\), for \(t \in [0,T)\), is a solution to the motion by curvature with initial datum \(\Gamma _0\) if the parametrizations \(\gamma ^i_t\) of \(\Gamma _t\) satisfy
and the collection of parametrizations \((\gamma ^1_t,\ldots ,\gamma ^N_t)\) belongs to \(W^{1,2}_p\big ((0,T)\times (0,1); {\mathbb {R}}^{2N} \big )\), with \(\gamma ^i_t|_{t=0}=\gamma ^i_0\) for any i.
The solution is assumed to be maximal, i.e., we ask that there does not exist another solution defined on \([0,{\widetilde{T}})\) with \({\widetilde{T}}>T\).
Remark 2.9
We stress the fact that the evolving network must be regular for every time \(t\in [0,T)\). From the PDE point of view this means that the system (2.1) is a boundary value problem with coupled boundary conditions: whenever \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^\ell ,\ell )\) is a junction then for all \(t\in [0,T)\)
We collect here some results on the motion by curvature of networks that are relevant for the sequel of this paper.
Theorem 2.10
(Short time existence and parabolic smoothing [26]) If \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) is an admissible initial datum, then there exists a solution \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\) to the motion by curvature starting from \(\Gamma _0\), for \(t \in [0,T)\). The solution is unique up to reparametrizations, it is smooth on \([\varepsilon ,T-\varepsilon ]\times G\) for any \(\varepsilon >0\), and \(\gamma ^i_t\rightarrow \gamma ^i_0\) in \(C^1([0,1])\) as \(t\rightarrow 0^+\) for any i.
Moreover, for any \(c_1,c_2>0\), there are \(\tau =\tau (c_1,c_2),M=M(p,c_1,c_2)>0\) such that if
then \(T\ge \tau \) and the solution \({\mathcal {N}}\) satisfies \(\Vert ({\mathcal {N}}_t)_t\Vert _{W^{1,2}_p} \le M\) for any \(t \in [0,\tau ]\).
To show existence of solutions one finds a unique solution to the special flow, i.e., the evolution determined by the non-degenerate parabolic second order equation
\[ \partial _t \gamma ^i_t = \frac{\partial _x^2 \gamma ^i_t}{|\partial _x \gamma ^i_t|^2}. \qquad \qquad (2.2) \]
Clearly \(\left\langle \partial _t \gamma ^i_t,\nu ^i \right\rangle \nu ^i= {\varvec{k}}^i\), so a solution to the special flow is in particular a solution to the motion by curvature. Uniqueness up to reparametrizations is obtained by showing that any solution of the network flow can be obtained as a reparametrization of the solution of the special flow.
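The relation between the special flow and the motion by curvature follows from a direct computation, which we sketch: writing \(\gamma ' = |\gamma '|\tau \) and recalling \(\partial _s = |\gamma '|^{-1}\partial _x\),

```latex
% The special-flow velocity splits into the curvature vector plus a
% purely tangential term (which only reparametrizes the curve):
\partial_x^2\gamma
  = \partial_x\big(|\gamma'|\,\tau\big)
  = (\partial_x|\gamma'|)\,\tau + |\gamma'|^{2}\,\partial_s\tau
  = (\partial_x|\gamma'|)\,\tau + |\gamma'|^{2}\,\boldsymbol{k},
\\[2pt]
\frac{\partial_x^2\gamma}{|\gamma'|^{2}}
  = \boldsymbol{k} + \frac{\partial_x|\gamma'|}{|\gamma'|^{2}}\,\tau .
```

Projecting onto the normal direction gives exactly the curvature vector, as claimed.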
Remark 2.11
Theorem 2.10 yields existence and uniqueness of a solution in the sense of Definition 2.8 starting from any regular network of class \(W^{2-\nicefrac {2}{p},p}\). This is a great advantage in comparison with the theory first developed in [4, 43], where an initial datum \(\Gamma _0\) parametrized by \(\gamma ^i_0\) is required not only to be regular and \(C^2\), but also suitably geometrically compatible, that is, such that
\[ k^{i_p}(1) = 0, \qquad k^i(e^i)+k^j(e^j)+k^\ell (e^\ell ) = 0, \]
at any endpoint \(p \in G\) and at any junction \(m = \pi (e^i,i)=\pi (e^j,j)=\pi (e^\ell ,\ell )\), where \(k^n\) is the oriented curvature of \(\gamma ^n\) for any n.
Remark 2.12
We stress that for any admissible initial datum as in Theorem 2.10, the results in [26] imply that we can take as solution to the motion by curvature exactly the solution to the special flow. More precisely, for any positive time the parametrizations \(\gamma ^i_t\) of the solution verify Eq. (2.2) with no need to reparametrize the solution or the initial datum. We will always assume that the solution to the motion by curvature satisfies (2.2) whenever nothing different is specified.
As a consequence, the solution satisfies the analytic compatibility conditions of every order (see [43, Definition 4.7, Definition 4.16]). In particular, the following compatibility conditions of order two hold:
at any endpoint \(p \in G\) and at any junction \(m = \pi (e^i,i)=\pi (e^j,j)=\pi (e^\ell ,\ell )\).
The fact that for any positive time we can take a smooth solution to the special flow is a key point in our analysis, as it allows us to apply the classical results and to use all the estimates derived in [40, 43, 44].
In the next statement we recall the possible singularities occurring at a singular time.
Theorem 2.13
(Long time behavior [43]) Let \(({\mathcal {N}}_t)_t\) be a solution to the motion by curvature with initial datum \(\Gamma _0\) in the time interval [0, T). Then either \(T=+\infty \), or as \(t\rightarrow T\) at least one of the following happens:
(i) the limit inferior of the length of at least one curve of the network is zero;
(ii) the limit superior of the \(L^2\)-norm of the curvature is \(+\infty \).
As mentioned in the introduction, the possibilities listed in the above theorem are not mutually exclusive.
3 Łojasiewicz–Simon inequalities
3.1 Graph parametrization of regular networks
We will employ the following notation. Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a regular network. Denote by \(\gamma ^i_*\) the parametrization of the i-th edge of \(\Gamma _*\), and by \(\tau ^i_*,\nu ^i_*\) the relative unit tangent and normal vectors. Whenever \(m:=\pi (e^i,i)=\pi (e^j,j)\) is a junction, denote
\[ \alpha ^{ij}_m := \langle \tau ^i_*(e^i),\tau ^j_*(e^j)\rangle , \qquad \beta ^{ij}_m := \langle \tau ^i_*(e^i),\nu ^j_*(e^j)\rangle . \]
Observe that \(\alpha ^{ij}_m=\langle \nu ^i_*(e^i),\nu ^j_*(e^j)\rangle \), \(\alpha ^{ij}_m=\alpha ^{ji}_m\) and \(\beta ^{ij}_m=-\beta ^{ji}_m\).
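Assuming, consistently with the symmetry properties just stated, that \(\alpha ^{ij}_m\) and \(\beta ^{ij}_m\) denote the scalar products \(\langle \tau ^i_*(e^i),\tau ^j_*(e^j)\rangle \) and \(\langle \tau ^i_*(e^i),\nu ^j_*(e^j)\rangle \), at a junction of a regular network these coefficients are explicit:

```latex
% At a regular junction the tangents of two concurrent edges form an
% angle of \pm 2\pi/3 modulo \pi (the sign depends on the orientations
% of the two edges), hence
|\alpha^{ij}_m| = \big|\cos\tfrac{2}{3}\pi\big| = \tfrac{1}{2}, \qquad
|\beta^{ij}_m| = \sin\tfrac{2}{3}\pi = \tfrac{\sqrt{3}}{2}, \qquad
\big(\alpha^{ij}_m\big)^{2} + \big(\beta^{ij}_m\big)^{2} = 1 .
```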
We now derive the necessary conditions holding at the junctions for the parametrizations of a triple junctions network written as a graph over a regular one.
Lemma 3.1
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a regular network and \(\Gamma :G\rightarrow {\mathbb {R}}^2\) be a network such that at a junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) we have
\[ \gamma ^\ell (e^\ell ) = \gamma ^\ell _*(e^\ell ) + {\textsf{N}}^\ell (e^\ell )\,\nu ^\ell _*(e^\ell ) + {\textsf{T}}^\ell (e^\ell )\,\tau ^\ell _*(e^\ell ) \qquad \text {for } \ell \in \{i,j,k\}, \]
for some constants \({\textsf{N}}^\ell (e^\ell ),{\textsf{T}}^\ell (e^\ell ) \in {\mathbb {R}}\).
Then the following relations hold
In particular
Proof
Let \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) be a junction of \(\Gamma \). Then \(\gamma ^i(e^i)=\gamma ^j(e^j)=\gamma ^k(e^k)\), that is
Multiplying (3.5) by \(\tau ^i_*(e^i)\) we get
Hence analogously
Combining (3.7) and (3.8) we obtain
that gives the first line in (3.2). The remaining identities in (3.2) follow analogously.
Denoting by \(p=\Gamma (m)= \gamma ^i(e^i)=\gamma ^j(e^j)=\gamma ^k(e^k)\) the image of the junction, since
we get
that is (3.3). Now plugging (3.3) into (3.2) readily implies (3.4). \(\square \)
If now \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) is regular, in the next lemma we state sufficient conditions on the functions \({\textsf{N}}^\ell , {\textsf{T}}^\ell \) to define a triple junctions network as a graph over \(\Gamma _*\).
Lemma 3.2
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a regular network. Then there exists \(\varepsilon _{\Gamma _*}>0\) such that for every \({\textsf{N}}^\ell ,{\textsf{T}}^\ell \in C^1([0,1])\), with \(\Vert {\textsf{N}}^\ell \Vert _{C^1}, \Vert {\textsf{T}}^\ell \Vert _{C^1} \le \varepsilon _{\Gamma _*}\) fulfilling at any junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) the identities
the maps
\[ \gamma ^\ell := \gamma ^\ell _* + {\textsf{N}}^\ell \,\nu ^\ell _* + {\textsf{T}}^\ell \,\tau ^\ell _* \]
define a triple junctions network.
Proof
It is sufficient to check that whenever \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) is a junction, then \(\gamma ^i(e^i)=\gamma ^j(e^j)=\gamma ^k(e^k)\), namely, we have to check that the three vectors \({\textsf{N}}^i(e^i)\nu ^i_*(e^i) + {\textsf{T}}^i(e^i)\tau ^i_*(e^i)\), \({\textsf{N}}^j(e^j)\nu ^j_*(e^j) + {\textsf{T}}^j(e^j)\tau ^j_*(e^j)\) and \({\textsf{N}}^k(e^k)\nu ^k_*(e^k) + {\textsf{T}}^k(e^k)\tau ^k_*(e^k)\) coincide. To this aim, observe that the identities in the assumptions imply that \({\textsf{T}}^i(e^i), {\textsf{T}}^j(e^j), {\textsf{T}}^k(e^k)\) satisfy (3.2). Taking scalar products of \({\textsf{N}}^j(e^j)\nu ^j_*(e^j) + {\textsf{T}}^j(e^j)\tau ^j_*(e^j)\) and \({\textsf{N}}^k(e^k)\nu ^k_*(e^k) + {\textsf{T}}^k(e^k)\tau ^k_*(e^k)\) with \(\tau ^i_*(e^i)\) and \(\nu ^i_*(e^i)\), exploiting (3.2) one easily checks that
\(\square \)
The previous Lemmas 3.1 and 3.2 motivate the following definition.
Definition 3.3
Let G be a regular graph. At any junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), if \(i<j<k\), we denote by \(L^i,L^j,L^k\) the linear maps
for any \((a,b)\in {\mathbb {R}}^2\). Moreover, we denote by \(I_m\) the set of indices \(\ell \) such that \(E^\ell \) has an endpoint at m, and we denote by \(e^\ell _m\in \{0,1\}\) the endpoint of \(E^\ell \) at m, for \(\ell \in I_m\).
Furthermore, for \(\ell \in I_m\), we denote by \({\mathscr {L}}^\ell _m\) the linear operator \(L^i\), \(L^j\), or \(L^k\), depending on whether \(\ell \) is the minimal, intermediate, or maximal index in \(I_m\).
Finally, for any endpoint p, we denote by \(i_p\) the corresponding index such that p is an endpoint of \(E^{i_p}\).
We are now ready to prove the existence of a canonical graph parametrization over a regular network for triple junctions networks that are close to it in \(H^2\)-norm. As discussed in the introduction, we shall perform the construction by fixing a dependence of the tangential component of the graph on the normal one. Such dependence is naturally defined by suitably extending to the interior of the edges the relations established at junctions in Lemmas 3.1 and 3.2.
For this purpose, from now on and for the rest of the paper, we fix a nonincreasing smooth cut-off function
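Only qualitative properties of \(\chi \) are used in the sequel; a sketch of a consistent choice (the thresholds \(\tfrac{3}{8}\) and \(\tfrac{1}{2}\) are an assumption, chosen to match the supports \([0,\tfrac{5}{8})\) and \((\tfrac{3}{8},1]\) appearing below) is:

```latex
\[
\chi \in C^\infty \big ([0,1];[0,1]\big ), \qquad \chi ' \le 0, \qquad
\chi \equiv 1 \ \text{on } \big [0,\tfrac{3}{8}\big ], \qquad
\chi \equiv 0 \ \text{on } \big [\tfrac{1}{2},1\big ].
\]
```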
Proposition 3.4
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network. Then there exists \(\varepsilon _{\Gamma _*}>0\) such that whenever \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a triple junctions network of class \(H^2\) such that
for any i, then there exist functions \({\textsf{N}}^i, {\textsf{T}}^i \in H^2(\textrm{d}x)\) and reparametrizations \(\varphi ^i:[0,1]\rightarrow [0,1]\) of class \(H^2(\textrm{d}x)\) such that
At any junction \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), where \(i<j<k\), there holds
for \(x \in [0,\tfrac{1}{2}]\).
If \(\pi (1,i)\) is an endpoint, then
for \(x \in [\tfrac{1}{2},1]\).
Moreover
-
for any \(\delta >0\) there is \(\varepsilon \in (0,\varepsilon _{\Gamma _*})\) such that
$$\begin{aligned} \sum _i \Vert \gamma ^i_* - \gamma ^i \Vert _{H^2(\textrm{d}x)} \le \varepsilon \quad \implies \quad \sum _i \Vert {\textsf{N}}^i\Vert _{H^2(\textrm{d}x)}+ \Vert \varphi ^i(x)-x\Vert _{H^2(\textrm{d}x)} \le \delta ; \end{aligned}$$(3.13) -
for any \(\eta >0\) and \(m\in {\mathbb {N}}\) there is \(\varepsilon _{\eta ,m}\in (0,\varepsilon _{\Gamma _*})\) such that if \(\sum _i \Vert \gamma ^i_* - \gamma ^i \Vert _{C^{m+1}([0,1])} \le \varepsilon _{\eta ,m}\), then
$$\begin{aligned} \sum _i \Vert {\textsf{N}}^i\Vert _{H^m(\textrm{d}x)} \le \eta . \end{aligned}$$(3.14)
Proof
Without loss of generality, we can perform the construction at a junction \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), where \(i<j<k\), assuming \(e^i=e^j=e^k=0\). For the sake of clarity, we show how to construct \(\varphi ^i,{\textsf{N}}^i\) and \(\varphi ^j,{\textsf{N}}^j\) on \([0,\frac{1}{2}]\) only, the complete proof being a straightforward adaptation.
Consider the function \(F:[0,\frac{1}{2}]\times {\mathbb {R}}^2\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}^4\) given by
Extending by reflection, we can assume that F is also defined in an open neighborhood of \(x=0\). Since \(\{\tau ^i_*(0),\nu ^i_*(0)\}\) (and \(\{\tau ^j_*(0),\nu ^j_*(0)\}\)) is a basis of \({\mathbb {R}}^2\), there exist unique numbers \({\textsf{N}}^i(0),{\textsf{T}}^i(0)\) (and \({\textsf{N}}^j(0),{\textsf{T}}^j(0)\)) such that \(\gamma ^i(0) = \gamma ^i_*(0) + {\textsf{N}}^i(0)\nu ^i_*(0) + {\textsf{T}}^i(0)\tau ^i_*(0)\) (and \(\gamma ^j(0) = \gamma ^j_*(0) + {\textsf{N}}^j(0)\nu ^j_*(0) + {\textsf{T}}^j(0)\tau ^j_*(0)\)). Since \(\Gamma \) is a triple junctions network, by Lemma 3.1, we have that \(F(0,{\textsf{N}}^i(0),0,{\textsf{N}}^j(0),0)=0\). Moreover, the matrix
satisfies
It is readily checked that any two columns are linearly independent. Hence we can apply the implicit function theorem to get the existence of \(\varphi ^i,{\textsf{N}}^i\) and \(\varphi ^j,{\textsf{N}}^j\) defined on some interval \([0,\xi ]\subset [0,1/2]\) such that
From the identity
we estimate
Since \({\textsf{N}}^i = \langle \gamma ^i\circ \varphi ^i -\gamma ^i_*, \nu ^i_*\rangle \), and analogously for j, recalling (3.9) we get
We claim that
Indeed, suppose by contradiction that there are \(\delta >0\) and a sequence of triple junctions networks \(\Gamma _n\) such that \(\sum _i\Vert \gamma ^i_* - \gamma ^i_n \Vert _{H^2(\textrm{d}x)} \le 1/n \), but the implicit functions \(\varphi ^i_n:[0,\xi _n]\rightarrow [0,1/2]\) obtained as above do not satisfy (3.17). Denote by \(F_n,M_n, {\textsf{N}}^i_n,{\textsf{N}}^j_n\) the map, the matrices, and the functions defined by the above procedure applied to the network \(\Gamma _n\) in place of \(\Gamma \). Since \(\sum _i\Vert \gamma ^i_* - \gamma ^i_n \Vert _{H^2(\textrm{d}x)} \le 1/n \), there are \(S,\rho >0\) independent of n such that
whenever \(|x|<\rho \) and \(|(n^i,y^i,n^j,y^j) - ({\textsf{N}}_n^i(0),0,{\textsf{N}}_n^j(0),0)|<\rho \). Furthermore, since \(\partial _x F_n (x,n^i,y^i,n^j,y^j)\) does not depend on n, there is \(N>0\) such that
whenever \(|x|<\rho \) and \(|(n^i,y^i,n^j,y^j) - ({\textsf{N}}_n^i(0),0,{\textsf{N}}_n^j(0),0)|<\rho \), for any n. Hence the assumptions of Theorem A.1 are satisfied, and thus there is \({{\overline{\xi }}}>0\) such that \(\xi _n\ge {{\overline{\xi }}}\) for any n. Then it must be that \(\Vert \varphi ^i_n(x)-x\Vert _{W^{1,\infty }(0,{{\overline{\xi }}})} > \delta \).
Up to subsequence, recalling the uniform bounds (3.16), we can pass to the limit \(n\rightarrow \infty \) in the identity
to obtain
where \(\varphi ^i_n\rightarrow \varphi ^i_\infty \), \({\textsf{N}}^i_n\rightarrow {\textsf{N}}^i_\infty \), and \({\textsf{N}}^j_n\rightarrow {\textsf{N}}^j_\infty \) in \(C^0([0,{{\overline{\xi }}}])\) and in \(H^1(0,{{\overline{\xi }}})\), and (3.18) holds pointwise on \([0,{{\overline{\xi }}}]\). By the uniqueness part of the implicit function theorem, we deduce that \(\varphi ^i_\infty (x)\equiv x\), \({\textsf{N}}^i_\infty (x)\equiv 0\), and \({\textsf{N}}^j_\infty (x)\equiv 0\).
Moreover, by (3.15), uniform convergence on the right hand side implies that \(\varphi ^i_n(x)\rightarrow x\), \({\textsf{N}}^i_n\rightarrow 0\), and \({\textsf{N}}^j_n\rightarrow 0\) in \(C^1([0,{{\overline{\xi }}}])\).
Hence \(0=\Vert \varphi ^i_\infty (x)-x\Vert _{W^{1,\infty }(0,{{\overline{\xi }}})} =\lim _n\Vert \varphi ^i_n(x)-x\Vert _{W^{1,\infty }(0,{{\overline{\xi }}})} >\delta \) gives a contradiction, and (3.17) follows.
By (3.17), up to decreasing \(\varepsilon _{\Gamma _*}\), we have \((\varphi ^i)'\ge \tfrac{1}{2}\) on \((0,{{\overline{\xi }}})\) for any i. Hence, further differentiating (3.15) and arguing as before, we also derive that
Since \({\overline{\xi }}\) only depends on \(\varepsilon _{\Gamma _*}\), we can iterate the above argument finitely many times to obtain the complete construction on the interval \([0,\frac{1}{2}]\) as claimed.
Moreover, (3.19) eventually implies (3.13). Similarly, further differentiating (3.15) leads to (3.14). \(\square \)
Arguing as in Proposition 3.4, one obtains the following analogous consequence for the parametrization of a time dependent family of networks in a neighborhood of a fixed one \(\Gamma _*\).
Corollary 3.5
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network. Then there exists \(\varepsilon _{\Gamma _*}>0\) such that whenever \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\) is a one-parameter family of triple junctions networks of class \(H^2\), differentiable with respect to t for \(t \in [t_0-h,t_0+h]\) with \(h>0\), such that \((t,x)\mapsto \partial _t \gamma ^i_t(x)\) is continuous for any i and
for any i, t, then there exist \(h'\in (0,h)\) and functions \({\textsf{N}}^i_t, {\textsf{T}}^i_t \in H^2(\textrm{d}x)\) and reparametrizations \(\varphi ^i_t:[0,1]\rightarrow [0,1]\) of class \(H^2(\textrm{d}x)\), continuously differentiable with respect to t for \(t \in [t_0-h',t_0+h']\) such that
At any junction \(\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), where \(i<j<k\), there holds
for \(x \in [0,\tfrac{1}{2}]\).
If \(\pi (1,i)\) is an endpoint, then
for \(x \in [\tfrac{1}{2},1]\).
Moreover
-
for any \(\delta >0\) there is \(\varepsilon \in (0,\varepsilon _{\Gamma _*})\) such that
$$\begin{aligned} \sum _i \Vert \gamma ^i_* - \gamma ^i_t \Vert _{H^2(\textrm{d}x)} \le \varepsilon \quad \forall \,t \quad \implies \quad \sum _i \Vert {\textsf{N}}^i_t\Vert _{H^2(\textrm{d}x)}+ \Vert \varphi ^i_t(x)-x\Vert _{H^2(\textrm{d}x)} \le \delta , \end{aligned}$$(3.20)for any \(t \in [t_0-h',t_0+h']\);
-
for any \(\eta >0\) and \(m\in {\mathbb {N}}\) there are \(\varepsilon _{\eta ,m}\in (0,\varepsilon _{\Gamma _*})\) and \(h_{\eta ,m} \in (0,h)\) such that if \(\sum _i \Vert \gamma ^i_* - \gamma ^i_t \Vert _{C^{m+1}([0,1])} \le \varepsilon _{\eta ,m}\) for any t, then
$$\begin{aligned} \sum _i \Vert {\textsf{N}}^i_t\Vert _{H^m(\textrm{d}x)} \le \eta , \end{aligned}$$(3.21)for any \(t \in [t_0-h_{\eta ,m}, t_0 + h_{\eta ,m}]\).
The construction in Proposition 3.4 and Corollary 3.5 of the “tangent functions” \({\textsf{T}}^i\)’s, which depend on the “normal functions” \({\textsf{N}}^i\)’s, motivates the next definition.
Definition 3.6
(Adapted tangent functions) Let G be a regular graph. Let \({\textsf{N}}^i,{\textsf{T}}^i:[0,1]\rightarrow {\mathbb {R}}\) be functions of class \(C^1\), for \(i=1,\ldots ,N\). We say that the \({\textsf{T}}^i\)’s are adapted to the \({\textsf{N}}^i\)’s whenever the relations (3.11) and (3.12) hold.
More explicitly, the \({\textsf{T}}^i\)’s are adapted to the \({\textsf{N}}^i\)’s whenever
for \(x \in [0,\tfrac{1}{2}]\) for any junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) with \(i<j<k\), and
for \(x \in [\tfrac{1}{2},1]\) for any endpoint \(\pi (1,i)\).
3.2 First and second variations
In order to derive the desired Łojasiewicz–Simon inequality, we need to compute the first and second variations of the length functional, taking variations determined by graph parametrizations over regular networks with tangent functions adapted to normal functions, as in Definition 3.6.
Proposition 3.7
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network. Then there is \(\varepsilon _{\Gamma _*}>0\) such that the following holds.
Let \({\textsf{N}}^i, X^i \in H^2\) with \(\Vert {\textsf{N}}^i\Vert _{H^2}\le \varepsilon _{\Gamma _*}\) be such that
Let \(\Gamma ^\varepsilon :G\rightarrow {\mathbb {R}}^2\) be the triple junctions network defined by
for any i, for any \(|\varepsilon |<\varepsilon _0\) and some \(\varepsilon _0>0\), where the \({\textsf{T}}^{i,\varepsilon }\)’s are adapted to the \(({\textsf{N}}^i+ \varepsilon X^i)\)’s, for any \(|\varepsilon |<\varepsilon _0\).
Call \(\Gamma \) the network given by the immersions \(\gamma ^i:=\gamma ^{i,0}\). Then
where \(f_{ij}, g_{ij}, h_{\ell j} \in {\mathbb {R}}\) depend on the topology of G.
If also \(\Gamma \) is regular and \(\gamma ^{i,\varepsilon }(p)=\gamma ^i_*(p) \) for any i at any endpoint p, then
Proof
Let us assume first that there is a junction m such that the functions \(X^\ell \) appearing in (3.23) all vanish except for \(\ell \in I_m\). Moreover, for \(\ell \in I_m\), assume that \(e^\ell _m=0\) and that \(X^\ell \) has compact support in \([0,\tfrac{5}{8})\).
Let us denote \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\), where \(i<j<k\). By differentiating the length functional we get
indeed, since \(\textrm{spt}\,X^\ell \subset [0,\tfrac{5}{8})\), by (3.22) and the definition of \(\chi \), we have that \({\textsf{T}}^{n,\varepsilon }\) does not depend on \(\varepsilon \) for all \(n \not \in I_m\). Moreover
for \(\ell \in I_m\), hence, letting
we find
Since \(\gamma ^{\ell ,\varepsilon }(0)=\gamma ^{l,\varepsilon }(0)\) for any \(\varepsilon \) and \(\ell ,l \in I_m\), then \(Y^\ell (0)=Y^l(0)\) for any \(\ell ,l \in I_m\). Hence, if \(\Gamma \) is regular, then the boundary term \(\sum _{\ell \in I_m} \langle \tau ^\ell (0),Y^\ell (0)\rangle =0\).
Employing (3.28) we get
Suppose now that there is an endpoint \(p \in P_G\) such that the functions \(X^\ell \) appearing in (3.23) all vanish except for \(\ell = i_p\). Moreover, assume that \(X^{i_p}\) has compact support in \((\tfrac{3}{8},1]\). Hence \(Y^{i_p}:=\partial _\varepsilon \gamma ^{i_p,\varepsilon } = X^{i_p}\nu ^{i_p}_*\) in this case, and the same computation performed above now yields
which takes the form given in (3.24). In case \(\gamma ^{i,\varepsilon }(p)=\gamma ^i_*(p) \) for any i at any endpoint p, then \({\textsf{N}}^{i_p}(1)=X^{i_p}(1)=0\), and (3.25) follows as well.
Considering now arbitrary variations as in (3.23), then (3.24) follows in the general case observing that the formula is linear with respect to the \(X^i\)’s and that each \(X^i\) can be written as \(X^i= \eta X^i + (1-\eta ) X^i\) in a way that \(\textrm{spt} (\eta X^i) \subset [0,\tfrac{5}{8})\) and \(\textrm{spt} ((1-\eta ) X^i )\subset (\tfrac{3}{8},1]\), recalling also that \(\partial _\varepsilon \gamma ^{i,\varepsilon }(p)=0\) at any endpoint p. Additive terms of the form \(g_{ij}\chi (1-x) \langle {\varvec{k}}^j,\tau ^j_*\rangle (1-x)|\partial _x \gamma ^j|(1-x) X^i (x)\) appear by changing variables in order to factor out the function \(X^i(x)\) in the i-th integral. \(\square \)
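The computations above repeatedly use the classical first variation of the length of a curve \(\gamma ^{\varepsilon }\) with variation field \(Y=\partial _\varepsilon \gamma ^{\varepsilon }|_{\varepsilon =0}\):

```latex
\[
\frac{\mathrm{d}}{\mathrm{d}\varepsilon }\bigg |_{\varepsilon =0}
\int _0^1 |\partial _x \gamma ^{\varepsilon }| \,\mathrm{d}x
= \int _0^1 \langle \tau , \partial _x Y\rangle \,\mathrm{d}x
= \langle \tau , Y\rangle \Big |_{x=0}^{x=1}
- \int _0^1 \langle {\boldsymbol{k}}, Y\rangle \,|\partial _x\gamma |\,\mathrm{d}x,
\]
```

where \(\tau \) and \({\boldsymbol{k}}\) denote the unit tangent and the curvature vector of \(\gamma =\gamma ^0\). The boundary terms produce the junction contributions discussed in the proof above, which cancel when \(\Gamma \) is regular.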
Proposition 3.8
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let \( X^i, Z^i \in H^2\) be such that
Let \(\Gamma ^{\varepsilon ,\eta }:G\rightarrow {\mathbb {R}}^2\) be the triple junctions network defined by
for any i, for any \(|\varepsilon |, |\eta |<\varepsilon _0\) and some \(\varepsilon _0>0\), where the \({\textsf{T}}^{i,\varepsilon ,\eta }\)’s are adapted to the \((\varepsilon X^i+\eta Z^i)\)’s, for any \(|\varepsilon |,|\eta |<\varepsilon _0\).
Then
where \(\partial _s X^i= |\partial _x\gamma ^i_*|^{-1}\partial _x X^i\) and \(\partial _s Z^i= |\partial _x\gamma ^i_*|^{-1}\partial _x Z^i\) for any i.
Proof
By Definitions 3.6 and 3.3, for any i we have that
where the \({\textsf{T}}^{i}_X\)’s are adapted to the \(X^i\)’s, and the \({\textsf{T}}^{i}_Z\)’s are adapted to the \(Z^i\)’s. Denoting \(\gamma ^{i,\varepsilon }:=\gamma ^{i,\varepsilon ,0}\), we compute
Since \(\partial _x \tau ^i_*=\partial _x \nu ^i_*=0\) as \(\Gamma _*\) is minimal, we get
Integrating by parts, the claim follows. \(\square \)
3.3 Łojasiewicz–Simon inequalities for minimal networks
We need to set up a functional analytic framework for proving the desired Łojasiewicz–Simon inequalities.
For a fixed minimal network \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\), we set \(M:=\sharp J_G\) and \(P:=\sharp P_G\), and we define the Banach spaces
endowed with \(\Vert {\overline{{\textsf{N}}}}\Vert _V^2 :=\sum _i \Vert {\textsf{N}}^i\Vert _{H^2}^2\), and
endowed with the product norm, where
and \(W_m\) is endowed with the Euclidean scalar product.
Observe that \(\textrm{j}:V\hookrightarrow Z\) is a compact embedding, where \(\textrm{j}\) is the natural injection
For \(r_{\Gamma _*}>0\) small enough, we also define the energy \({{\textbf {L}}}:B_{r_{\Gamma _*}} (0) \subset V\rightarrow [0,+\infty )\) by
where the \({\textsf{T}}^i\)’s are adapted to the \({\textsf{N}}^i\)’s (see Definition 3.6). We observe that, according to Lemma 3.2, the immersions \(\gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \) define a triple junctions network.
Corollary 3.9
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let \(V,Z,{{\textbf {L}}}\) as above, and identify \(Z^\star \) with \(\textrm{j}^\star (Z^\star )\subset V^\star \), for \(\textrm{j}\) as in (3.32).
Then the following hold.
-
1.
The first variation \(\delta {{\textbf {L}}}: V\rightarrow Z^\star \) is \(Z^\star \)-valued by setting
$$\begin{aligned} \begin{aligned} \delta {{\textbf {L}}}&({\overline{{\textsf{N}}}})[((v^\ell _m), {\overline{X}})]\\&= \sum _{m \in J_G} \sum _{\ell \in I_m} (-1)^{1+e^\ell _m} \left[ \langle \tau ^\ell (e^\ell _m), \nu ^\ell _*(e^\ell _m)\rangle + \sum _{j \in I_m} h_{\ell j}\langle \tau ^j(e^j_m) , \tau ^j_*(e^j_m)\rangle \right] v^\ell _m \\&\quad - \sum _i \int _0^1 \bigg ( \langle {\varvec{k}}^i,\nu ^i_*\rangle |\partial _x \gamma ^i| + \sum _j f_{ij}\chi \langle {\varvec{k}}^j,\tau ^j_*\rangle |\partial _x \gamma ^j| \\&\quad + g_{ij}\chi (1-x) \langle {\varvec{k}}^j,\tau ^j_*\rangle (1-x)|\partial _x \gamma ^j|(1-x) \bigg ) X^i \,\mathrm dx, \end{aligned} \end{aligned}$$(3.34)where \(f_{ij}, g_{ij}, h_{\ell j} \in {\mathbb {R}}\) depend on the topology of G, and \(\tau ^i, {\varvec{k}}^i\) are referred to the immersions \(\gamma ^i :=\gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i\tau ^i_*\), with \({\textsf{T}}^i\) adapted to \({\textsf{N}}^i\).
If also, the network defined by the immersions \(\gamma ^i\) is regular, then
$$\begin{aligned} \begin{aligned} \delta {{\textbf {L}}} ({\overline{{\textsf{N}}}})[((v^\ell _m), {\overline{X}})]&= - \sum _i \int _0^1 \bigg ( \langle {\varvec{k}}^i,\nu ^i_*\rangle |\partial _x \gamma ^i| + \sum _j f_{ij}\chi \langle {\varvec{k}}^j,\tau ^j_*\rangle |\partial _x \gamma ^j| \\&\quad + g_{ij}\chi (1-x) \langle {\varvec{k}}^j,\tau ^j_*\rangle (1-x)|\partial _x \gamma ^j|(1-x) \bigg ) X^i \,\mathrm dx, \end{aligned} \end{aligned}$$(3.35) -
2.
The second variation \(\delta ^2 {{\textbf {L}}}_0: V\rightarrow Z^\star \) at 0 is \(Z^\star \)-valued by setting
$$\begin{aligned} \begin{aligned} \delta ^2 {{\textbf {L}}}_0 ( {\overline{X}} ) [((v^\ell _m), {\overline{Z}}) ]&= \sum _{m \in J_G} \sum _{\ell \in I_m} (-1)^{1+e^\ell _m} \partial _s X^\ell (e^\ell _m) v^\ell _m \\&\quad - \sum _i \int _0^1 \bigg (|\partial _x \gamma ^i_*| \partial ^2_s X^i \bigg ) Z^i \,\mathrm dx, \end{aligned} \end{aligned}$$(3.36)where \(\partial _s X^n= |\partial _x\gamma ^n_*|^{-1}\partial _x X^n\) for any n.
Proof
For the sake of precision, we maintain \(Z^\star \) and \(\textrm{j}^\star (Z^\star )\) distinct in this proof.
The first item follows by Proposition 3.7. Let \({\overline{{\textsf{N}}}},{\overline{X}}\in V\). Equation (3.24) yields the expression for \(\delta {{\textbf {L}}}({\overline{{\textsf{N}}}}) \in V^\star \), and we notice that, since \({\overline{X}}\in V\), the sum over endpoints \(p \in P_G\) in (3.24) vanishes. Hence (3.24) shows that there exists an element \(\nabla {{\textbf {L}}}({\overline{{\textsf{N}}}})\) of Z such that \(\delta {{\textbf {L}}}({\overline{{\textsf{N}}}})[{\overline{X}}] = \langle \nabla {{\textbf {L}}}({\overline{{\textsf{N}}}}), \textrm{j}({\overline{X}})\rangle _{Z}\). Letting \({\textrm{I}}:Z\rightarrow Z^\star \) be the natural isometry, this means
that is, \(\delta {{\textbf {L}}}({\overline{{\textsf{N}}}}) \in \textrm{j}^\star (Z^\star )\), and (3.34) follows as well. By the same reasoning, (3.35) follows from (3.25).
The second item analogously follows from Proposition 3.8. In this case we notice that the sum over endpoints \(p \in P_G\) in (3.29) vanishes whenever \({\overline{Z}} \in V\), leading to (3.36). \(\square \)
Now we start checking that the assumptions needed to imply a Łojasiewicz–Simon inequality hold, see Proposition 3.12. We start from the analyticity of the functional and of its first variation.
Lemma 3.10
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let \(V,Z,{{\textbf {L}}}, r_{\Gamma _*}\) as above, and identify \(Z^\star \) with \(\textrm{j}^\star (Z^\star )\subset V^\star \), for \(\textrm{j}\) as in (3.32).
Then the maps \({{\textbf {L}}}:B_{r_{\Gamma _*}} (0) \subset V\rightarrow [0,+\infty )\) and \(\delta {{\textbf {L}}}: V\rightarrow Z^\star \) are analytic.
Proof
The claim easily follows by recalling that multilinear continuous maps are analytic and that sums and compositions of analytic maps are analytic. Moreover, if \(T_j:U\subset B\rightarrow C_j\), for \(j=1,2\), is analytic from an open set U of a Banach space B into a Banach space \(C_j\), and \(\cdot :C_1\times C_2\rightarrow D\) is a bilinear continuous map into a Banach space D, then the “product operator” \(T(v,w):=T_1(v)\cdot T_2(w)\) is analytic from U into D.
Concerning analyticity of \({{\textbf {L}}}\) we need to check that
is analytic for any i. Since the \({\textsf{T}}^i\)’s are adapted, they depend linearly on the \({\textsf{N}}^i\)’s; moreover, differentiation with respect to x is linear and continuous from V to \([H^1(0,1)]^N\). Also, for \(r_{\Gamma _*}\) sufficiently small, we have that \( \left| \partial _x \left( \gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_* \right) \right| \ge c_*>0\), for some \(c_*\) depending only on \(\Gamma _*\) and \(r_{\Gamma _*}\). Finally, integration is linear and continuous on \(L^1(0,1)\). Putting together all these observations, we get that \({{\textbf {L}}}\) is analytic.
The analyticity of \(\delta {{\textbf {L}}}: V\rightarrow Z^\star \) follows by completely analogous observations, recalling the expression in (3.34). Indeed, one can check that tangent and curvature vectors to an immersion \(\gamma ^i_* + {\textsf{N}}^i \nu ^i_* + {\textsf{T}}^i \tau ^i_*\) depend analytically on the parametrization, and then on \({\textsf{N}}\) (see for example the analogous treatment in [13, Section 3.1, Appendix B]). Moreover, the trace operator evaluating a tangent vector \(\tau ^\ell \in H^1(0,1)\) at junction points is linear and continuous. Recalling that product operators of analytic maps are analytic, the analyticity of \(\delta {{\textbf {L}}}\) follows. \(\square \)
Now we need to prove that the second variation is Fredholm of index zero. We recall that a continuous linear operator T between Banach spaces is Fredholm of index zero if its kernel has finite dimension, its image has finite codimension, and such dimensions are equal.
Lemma 3.11
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let \(V,Z,{{\textbf {L}}}\) as above, and identify \(Z^\star \) with \(\textrm{j}^\star (Z^\star )\subset V^\star \), for \(\textrm{j}\) as in (3.32).
Then the second variation \(\delta ^2 {{\textbf {L}}}_0: V\rightarrow Z^\star \) at 0 is a Fredholm operator of index zero.
Proof
Denote by \({\textrm{I}}:Z\rightarrow Z^\star \) the natural isometry. Recalling (3.36), we see that the claim follows as long as we can prove that the following operator is Fredholm of index 0:
Let
and let \(( (V^\ell _m), {\overline{Z}}) \in Z\) be fixed. We consider the operator \(F: V_1 \rightarrow {\mathbb {R}}\) given by
We can endow \(V_1\) with the scalar product \(\langle {\overline{X}},{\overline{Y}}\rangle :=\sum _i \int _0^1 \partial _s X^i \partial _s Y^i + X^iY^i \,\mathrm ds\), where \(\textrm{d}s=\textrm{d}s_{\gamma ^i_*}\) along the ith edge. Hence \(F:(V_1,\langle \cdot ,\cdot \rangle )\rightarrow {\mathbb {R}}\) is linear and continuous, and therefore, by the Riesz representation theorem, there exists a unique \({\overline{X}} \in V_1\) such that
for any \({\overline{Y}} \in V_1\). Testing on \({\overline{Y}} \in V_1\) such that \(Y^i\equiv 0\) for all i except for a fixed index j, and \(Y^j \in C^1_c(0,1)\), we see that
which implies that \(X^j \in H^2(0,1)\) with \(-\partial ^2_s X^j + X^j = Z^j\), and thus \({\overline{X}}\) belongs to V.
For \(m\in J_G\), we can now take \({\overline{Y}} \in V_1\) with \(Y^\ell \equiv 0\) for all \(\ell \) except for \(\ell \in I_m\), with \(Y^\ell \in C^1\) vanishing at the endpoint of \(E^\ell \) different from the junction m. Integration by parts in (3.37) then gives
Arbitrariness of \({\overline{Y}}\) implies that \(\sum _{\ell \in I_m} \left( (-1)^{1+ e^\ell _m} \partial _s X^\ell (e^\ell _m) - V^\ell _m \right) v^\ell =0\) for any triple \((v^\ell )_{\ell \in I_m}\) such that \(\sum _{\ell \in I_m} (-1)^{e^\ell _m} v^\ell =0\). This means that there exists a constant \(\alpha _m \in {\mathbb {R}}\) such that
Multiplying by \((-1)^{e^\ell _m}\) and summing over \(\ell \) implies that \(3 \alpha _m = - \sum _{\ell \in I_m} \partial _s X^\ell (e^\ell _m)\), and then
Therefore, we have proved that for arbitrary \(( (V^\ell _m), {\overline{Z}}) \in Z\) there exists a unique \({\overline{X}} \in V\) satisfying
Therefore, if we further define the linear and continuous operator \({\mathscr {F}}: V \rightarrow Z^\star \) given by
where \({\textrm{I}}:Z\rightarrow Z^\star \) is the natural isometry, we see that \({\mathscr {F}}\) is invertible, and thus it is Fredholm of index 0.
Recall that Fredholmness is stable under compact perturbations: a linear operator T between Banach spaces is Fredholm of index l if and only if \(T+K\) is Fredholm of index l, for any compact operator K (see [28, Section 19.1]). Therefore, since
is compact, we conclude that
is Fredholm of index 0 as well, completing the proof. \(\square \)
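In coordinates, the Lagrange-multiplier step in the proof above (the identification of the constant \(\alpha _m\)) reads as follows; we assume here, consistently with the constraint on the admissible test triples, that \(\sum _{\ell \in I_m}(-1)^{e^\ell _m}V^\ell _m=0\):

```latex
\[
(-1)^{1+e^\ell _m}\,\partial _s X^\ell (e^\ell _m) - V^\ell _m
= \alpha _m\,(-1)^{e^\ell _m}, \qquad \ell \in I_m,
\]
and multiplying by $(-1)^{e^\ell _m}$ and summing over $\ell \in I_m$
(using $((-1)^{e^\ell _m})^2=1$),
\[
-\sum _{\ell \in I_m}\partial _s X^\ell (e^\ell _m)
-\sum _{\ell \in I_m}(-1)^{e^\ell _m}V^\ell _m = 3\alpha _m,
\qquad \text{so}\qquad
3\alpha _m = -\sum _{\ell \in I_m}\partial _s X^\ell (e^\ell _m).
\]
```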
We can now apply the following abstract result, which provides sufficient conditions for a Łojasiewicz–Simon gradient inequality to hold.
Proposition 3.12
([52, Corollary 2.6]) Let \(E:B_{\rho _0}(0)\subseteq V \rightarrow {\mathbb {R}}\) be an analytic map, where V is a Banach space. Suppose that 0 is a critical point for E, i.e., \(\delta E_0 = 0\). Assume that there exists a Banach space Z such that \(V\hookrightarrow Z\), the first variation \(\delta E: B_{\rho _0}(0)\rightarrow Z^\star \) is \(Z^\star \)-valued and analytic and the second variation \(\delta ^2 E_0: V \rightarrow Z^\star \) evaluated at 0 is \(Z^\star \)-valued and Fredholm of index zero.
Then there exist constants \(C,\rho _1>0\) and \(\theta \in (0,1/2]\) such that
for every \(v \in B_{\rho _1}(0) \subseteq V\).
The above functional analytic result is a corollary of the theory developed in [9] and has been independently observed in [54].
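In the notation of the proposition, the asserted gradient inequality is the standard Łojasiewicz–Simon estimate, which we record here with the constants of the statement:

```latex
\[
|E(v) - E(0)|^{1-\theta } \le C \,\big \Vert \delta E (v) \big \Vert _{Z^\star }
\qquad \text{for every } v \in B_{\rho _1}(0) \subseteq V,
\]
```

with \(\theta \) the Łojasiewicz exponent.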
Theorem 3.13
(Łojasiewicz–Simon inequality at minimal networks) Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Let V, Z be as in (3.30), (3.31), and define \({{\textbf {L}}}:B_{r_{\Gamma _*}} (0) \subset V\rightarrow [0,+\infty )\) as in (3.33).
Then there exist \(C_{\textrm{LS}}>0\), \(\theta \in (0,\tfrac{1}{2}]\), and \(r \in (0,r_{\Gamma _*}]\) such that
for any \({\overline{{\textsf{N}}}} \in B_r(0)\subset V\).
Proof
The proof follows immediately by applying Proposition 3.12, in view of Lemmas 3.10 and 3.11. \(\square \)
We can finally derive the following more explicit Łojasiewicz–Simon inequality for regular networks.
Corollary 3.14
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then there exist \(C_{\textrm{LS}},\sigma >0\) and \(\theta \in (0,\tfrac{1}{2}]\) such that the following holds.
If \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a regular network of class \(H^2\) such that
then
Proof
For \(\sigma \) small enough, applying Proposition 3.4 and recalling (3.13), we know that there exist functions \({\textsf{N}}^i, {\textsf{T}}^i \in H^2(\textrm{d}x)\), where the \({\textsf{T}}^i\)’s are adapted to the \({\textsf{N}}^i\)’s, and reparametrizations \(\varphi ^i:[0,1]\rightarrow [0,1]\) such that
Moreover, by (3.39), Lemma 3.1, and up to decreasing \(\sigma \), we have that \({\overline{{\textsf{N}}}}:=({\textsf{N}}^1,\ldots ,{\textsf{N}}^N)\) belongs to the ball \(B_r(0)\subset V\), where r, V are as in Theorem 3.13.
For \({{\textbf {L}}}, Z\) as in Theorem 3.13, since \(\Gamma \) is regular, by (3.35) we get that
Since \({\textrm{L}}(\Gamma )={{\textbf {L}}}({\overline{{\textsf{N}}}})\) and the \(L^2(\textrm{d}s)\) norm of the curvature on the right hand side of (3.40) does not depend on the parametrization, the above estimate together with Theorem 3.13 implies (3.40). \(\square \)
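Consistently with the description given in the introduction, the inequality (3.40) obtained in this way bounds a concave power of the length difference by the \(L^2\)-norm of the curvature; schematically, it takes the form

```latex
\[
\big | {\textrm{L}}(\Gamma ) - {\textrm{L}}(\Gamma _*) \big |^{1-\theta }
\le C_{\textrm{LS}} \left( \int _{\Gamma } |{\boldsymbol{k}}|^2 \,\mathrm{d}s \right) ^{\!1/2}.
\]
```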
Remark 3.15
(Further Łojasiewicz–Simon inequalities at minimal networks) By an adaptation of the above arguments, we expect that it is possible to prove a Łojasiewicz–Simon inequality at minimal networks taking into account also variations at the endpoints.
More precisely, removing the constraint \({\textsf{N}}^{i_p}(1)=0\) for \({\overline{{\textsf{N}}}} \in V\) in (3.30), and considering \({\widetilde{Z}}:={\mathbb {R}}^P \times Z\), for Z as in (3.31), employing the variation formulae in Propositions 3.7 and 3.8, one can consider triple-junctions networks \(\Gamma \) in a neighborhood of a minimal one \(\Gamma _*\) having endpoints different from those of \(\Gamma _*\).
Arguing as in the above propositions, one eventually deduces an analog of Theorem 3.13. The resulting statement would formally read exactly as Theorem 3.13, but in this case the norm \(\left\| \delta {{\textbf {L}}}({\overline{{\textsf{N}}}}) \right\| _{Z^\star }\) on the right hand side of the inequality also counts contributions from the varied endpoints. More precisely, all the terms in the first variation formula (3.24) representing the operator \(\delta {{\textbf {L}}}({\overline{{\textsf{N}}}})\) do not vanish in general and thus contribute to its norm.
4 Minimal networks locally minimize length
In this section we provide a simple proof of the fact that minimal networks are automatically local minimizers for the length with respect to perturbations sufficiently small in \(C^0\).
More precisely, we say that a regular network \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) locally minimizes the length in \(C^0\) if there exists \(\eta >0\) such that \({\textrm{L}}(\Gamma ) \ge {\textrm{L}}(\Gamma _*)\) whenever \(\Gamma :G\rightarrow {\mathbb {R}}^2\) is a regular network having the same endpoints as \(\Gamma _*\) and such that \(\Vert \gamma ^i\circ \sigma ^i-\gamma ^i_*\Vert _{C^0} < \eta \), for some reparametrizations \(\sigma ^i\).
We mention that more general minimality properties of minimal networks can be proved, see [24, 47, 48, 51, 60].
Lemma 4.1
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then \(\Gamma _*\) locally minimizes the length in \(C^0\).
Proof
For any \(r>0\) and for any junction \(m=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) of G, let \(T_{r,m}\) be the closed equilateral triangle having \(\Gamma _*(m)\) as barycenter, whose sides have length r and are orthogonal to the inner tangent vectors at m, that is, to the vectors \((-1)^{e^i}\tau ^i_*(e^i)\), \((-1)^{e^j}\tau ^j_*(e^j)\), \((-1)^{e^k}\tau ^k_*(e^k)\).
Now fix \(r>0\) small enough such that the set \(T_{r,m} \cap \Gamma _*(G)\) is a standard triod for any junction m, i.e., this set is the union of three straight segments of the same length, with one common endpoint, forming angles equal to \(\tfrac{2}{3}\pi \) (see Fig. 2).
Let \(\Gamma :G\rightarrow {\mathbb {R}}^2\) be a smooth regular network with the same endpoints as \(\Gamma _*\). If, up to reparametrization, the immersions defining \(\Gamma \) are close to those of \(\Gamma _*\) in \(C^0\), then for any edge \(E^i\) whose endpoints, say, \(m=\pi (0,i)\) and \(m'=\pi (1,i)\) are two junctions, we can fix times \(0<t_{i,1}<t_{i,2}<1\) such that \(t_{i,1}\) is the last time \(\gamma ^i\) intersects \(\partial T_{r,m}\) and \(t_{i,2}\) is the first time \(\gamma ^i\) intersects \(\partial T_{r,m'}\). Such intersections define points close to \((\partial T_{r,m} ) \cap \Gamma _*(G)\) and \((\partial T_{r,m'} ) \cap \Gamma _*(G)\). In case \(\pi (0,i)\) is an endpoint, we set \(t_{i,1}=0\).
In order to complete the proof, if, say, \(m=\pi (0,i)=\pi (0,j)=\pi (0,k)\) is a junction, it is sufficient to prove that the length of \(\Gamma _*\) in \(T_{r,m}\) is no larger than the sum \(\sum _{\ell =i,j,k} {\textrm{L}}(\gamma ^\ell |_{(0,t_{\ell ,1})})\). Indeed, \(\Gamma _*(G)\setminus \cup _m T_{r,m}\) is given by straight segments orthogonal to the sides of the triangles \(T_{r,m}\), whose endpoints lie either on parallel sides of different triangles \(T_{r,m'}, T_{r,m''}\), or on a side of a triangle \(T_{r,m}\) and on an endpoint \(\Gamma _*(p)\) of the network. Hence the length of \(\Gamma _*\) outside \( \cup _m T_{r,m}\) is automatically no larger than the sum of the lengths of the curves of \(\Gamma \) on the intervals \((t_{i,1},t_{i,2})\).
Eventually, the argument reduces to proving that the length of a standard triod \({\mathbb {T}}\) whose endpoints are the midpoints of the sides of an equilateral triangle is the least possible among the lengths of topological triods having endpoints on the sides of the same triangle close to those of \({\mathbb {T}}\) (see Fig. 2). Up to scaling and translation, let us assume that the endpoints of the standard triod are located at the points \((-1,0), (1,0), (0,\sqrt{3})\) in the plane. Hence the endpoints of a competitor triod take the form \(A=(-1,0)+s(-\tfrac{1}{2},\tfrac{\sqrt{3}}{2})\), \(B=(1,0)+t(\tfrac{1}{2},\tfrac{\sqrt{3}}{2})\), \(C=(x,\sqrt{3})\) for s, t, x close to zero (see Fig. 2). The length of the competitor triod is greater than or equal to that of the Steiner tree joining A, B, C, which is another topological triod \({\mathbb {S}}\) whose total length can be shown to be equal to the length of the segment CT, where T is the point (farthest from C) such that A, B, T are vertices of an equilateral triangle (see Fig. 2 and [50]). In the end, the proof follows if we show that \({\textrm{L}}({\mathbb {T}}) \le {\textrm{L}}(CT)\).
In our choice of coordinates we have that \({\textrm{L}}(\mathbb T)=2\sqrt{3}\). On the other hand we have that \(T=A + \textrm{R}(B-A)\), where \(\textrm{R}\) is the clockwise rotation by an angle equal to \(\tfrac{\pi }{3}\). Hence
$$\begin{aligned} T = \left( t-s, -\sqrt{3}\right) . \end{aligned}$$
Then \({\textrm{L}}(CT)^2 = (t-s-x)^2+(-\sqrt{3}-\sqrt{3})^2 \ge (2\sqrt{3})^2 = {\textrm{L}}({\mathbb {T}})^2\), which completes the proof. \(\square \)
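As an independent sanity check (an added illustration, not part of the original argument), the identification of T can be verified numerically: with the perturbed endpoints A, B as above, the point \(T=A+\textrm{R}(B-A)\) always equals \((t-s,-\sqrt{3})\), so that \({\textrm{L}}(CT)^2=(t-s-x)^2+12\ge 12\). The helper names below are our own.

```python
import math

def rotate_cw(v, angle):
    """Rotate the vector v clockwise by `angle` radians."""
    x, y = v
    c, s = math.cos(angle), math.sin(angle)
    return (x * c + y * s, -x * s + y * c)

def torricelli_point(s, t):
    """Point T = A + R(B - A) for the perturbed endpoints A, B of the triod."""
    A = (-1 - s / 2, s * math.sqrt(3) / 2)
    B = (1 + t / 2, t * math.sqrt(3) / 2)
    d = (B[0] - A[0], B[1] - A[1])
    r = rotate_cw(d, math.pi / 3)
    return (A[0] + r[0], A[1] + r[1])

# For any small s, t the point T equals (t - s, -sqrt(3)), hence
# L(CT)^2 = (t - s - x)^2 + 12 >= 12 = L(T)^2 for C = (x, sqrt(3)).
for s, t in [(0.0, 0.0), (0.05, -0.03), (-0.02, 0.04)]:
    T = torricelli_point(s, t)
    assert abs(T[0] - (t - s)) < 1e-12 and abs(T[1] + math.sqrt(3)) < 1e-12
```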
5 Stability and convergence
In this section we prove our main stability theorem. First we need the next technical lemma, which is based on a simple contradiction argument showing that a motion by curvature starting sufficiently close in \(H^2\) to a minimal network \(\Gamma _*\) passes as close as prescribed to \(\Gamma _*\) in \(C^k\) at some positive time.
Lemma 5.1
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then for any \(\eta >0\) and \(k \in {\mathbb {N}}\) there exists \({\overline{\varepsilon }}={\overline{\varepsilon }}(\Gamma _*,\eta ,k)>0\) such that the following holds.
For any smooth regular network \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) such that \(\Vert \gamma ^i_0-\gamma ^i_*\Vert _{H^2} < {\overline{\varepsilon }}\), the motion by curvature \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\), for \(t \in [0,T)\), starting from \(\Gamma _0\) satisfies
for some \(\tau \in (0,T)\) and smooth reparametrizations \(\sigma ^i\), for any i.
Proof
Suppose by contradiction that there are \(\eta >0,k \in {\mathbb {N}}\) and a sequence of smooth regular networks \(\Gamma _{n,0}:G\rightarrow {\mathbb {R}}^2\) such that \(\Vert \gamma ^i_{n,0}-\gamma ^i_*\Vert _{H^2} < 1/n\), but the motions by curvature \(\Gamma _{n,t}:G\rightarrow {\mathbb {R}}^2\), defined on maximal intervals \([0,T_n)\) and starting from \(\Gamma _{n,0}\), satisfy
for any \(t \in (0,T_n)\) and any reparametrizations \(\sigma ^i_t\), where \(\sigma ^i_t\) is smooth with respect to x.
By Theorem 2.10, since \(\Vert \gamma ^i_{n,0}-\gamma ^i_*\Vert _{H^2} \rightarrow 0\) for any i as \(n\rightarrow \infty \), there exists \(T>0\) such that \(T_n> 2 T\) for any n. Moreover the solutions \({\mathcal {N}}_n\) of the motion by curvature starting from \(\Gamma _{n,0}\) satisfy a uniform bound \(\Vert {\mathcal {N}}_n \Vert _{W^{1,2}_5}\le M=M(\Gamma _*)\). By the compact embedding \(W^{1,2}_5\hookrightarrow W^{1,2}_4\), it follows that, up to a subsequence, the solutions \(\gamma ^i_{n,t}\) converge in \(W^{1,2}_4\left( (0,T)\times (0,1);{\mathbb {R}}^2\right) \) to limit immersions \(\gamma ^i_{\infty ,t}\). Moreover, \(\gamma ^i_{n,0}\rightarrow \gamma ^i_{\infty ,0}=\gamma ^i_*\) in \(H^2\) and, passing to the limit at almost every t, x in
we deduce that the maps \(\gamma ^i_{\infty ,t}\) give a solution to the motion by curvature starting from \(\Gamma _*\). Since \(\Gamma _*\) is minimal, \(\gamma ^i_{\infty ,t}\) actually coincides with \(\gamma ^i_*\) up to reparametrization.
From the uniform bound in \(W^{1,2}_5\), we can fix \(s\in (0,T)\) such that \(\gamma ^i_{n,s}\rightarrow \gamma ^i_{\infty ,s}\) in \(H^2\) for any i. Hence the \(L^2(\textrm{d}s)\)-norm of the curvature of \(\gamma ^i_{n,s}\) is bounded from above and the length \({\textrm{L}}(\gamma ^i_{n,s})\) is bounded from below away from zero, independently of n. Recalling from Theorem 2.10 and Remark 2.12 that for positive times the flow is smooth and evolves according to \(\partial _t\gamma ^i_{n,t} = \partial ^2_x\gamma ^i_{n,t}/|\partial _x\gamma ^i_{n,t}|^2\), we can apply the regularity estimates from [43, Proposition 5.10, Proposition 5.8] considering \(\Gamma _{n,s}\) as a new initial datum. This implies that there are \(s<T_1\le T\) and \(C_m>0\), for any \(m \in {\mathbb {N}}\), independent of n, such that \(\Vert {\varvec{k}}^i_{n,t}\Vert _{H^m(\textrm{d}s)}\le C_m\) for any \(t \in [s,T_1]\).
Therefore the sequence of flows \(\Gamma _{n,t}\) converges smoothly on \([s,T_1]\times G\), up to reparametrizations, to the motion by curvature \({{\widehat{\Gamma }}}_{\infty ,t}\) parametrized by \({{\widehat{\gamma }}}^i_{\infty ,t}\), and \({{\widehat{\gamma }}}^i_{\infty ,t}\) is a reparametrization of \(\gamma ^i_*\). As the convergence holds in \(H^m\) for any \(m \in {\mathbb {N}}\), we find a contradiction with (5.2) at any \(t \in [s,T_1]\) for large n. \(\square \)
Theorem 5.2
Let \(\Gamma _*:G\rightarrow {\mathbb {R}}^2\) be a minimal network. Then there exists \(\delta _{\Gamma _*}>0\) such that the following holds.
Let \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network such that \(\gamma ^i_*(p)=\gamma ^i_0(p)\) for any endpoint \(p\in G\) and such that \(\Vert \gamma ^i_0-\gamma ^i_*\Vert _{H^2(\textrm{d}x)}\le \delta _{\Gamma _*}\). Then the motion by curvature \(\Gamma _t:G\rightarrow {\mathbb {R}}^2\) starting from \(\Gamma _0\) exists for all times and smoothly converges, up to reparametrization, to a minimal network \(\Gamma _\infty \) such that \({\textrm{L}}(\Gamma _\infty )={\textrm{L}}(\Gamma _*)\).
Proof
We recall the following interpolation inequalities. For any \(k \in {\mathbb {N}}\) with \(k\ge 1\) there exist \(\lambda _k>0,\zeta _k\in (0,1)\) such that
$$\begin{aligned} \Vert u \Vert _{H^{k}(\textrm{d}x)} \le \lambda _k \Vert u \Vert _{H^{k+1}(\textrm{d}x)}^{1-\zeta _k} \Vert u \Vert _{L^2(\textrm{d}x)}^{\zeta _k}, \end{aligned}$$(5.3)for any \(u \in H^{k+1}\left( (0,1);{\mathbb {R}}^N\right) \). We shall drop the subscript k when \(k=2\).
Let \(\sigma ,\theta ,r,C_{\textrm{LS}}\) be given by Theorem 3.13 and Corollary 3.14, where \(C_{\textrm{LS}}\) is the maximum of the constants given by both the statements.
Recalling Lemma 4.1, up to decreasing \(r>0\), we can assume that the following holds. Whenever \({{\widehat{\gamma }}}^i:=\gamma ^i_*+ {\textsf{N}}^i\nu ^i_*+{\textsf{T}}^i\tau ^i_*\) defines a smooth regular network, for \({\overline{{\textsf{N}}}}\in B_r(0)\subset V\) in the notation of Theorem 3.13, where the \({\textsf{T}}^i\)’s are adapted, then:
(1)
there exists a constant \(C_G>2\), depending only on the graph G and \(\Gamma _*\), such that
$$\begin{aligned} \begin{aligned}&\langle {{\widehat{\nu }}}^i , \nu ^i_*\rangle \ge \frac{3}{4} , \qquad \qquad |\langle {{\widehat{\nu }}}^i , \tau ^i_*\rangle | < \frac{1}{C_G}, \\&\sum _{m \in J_G}\sum _{\ell \in I_m} \left| a^\ell (x) \langle {{\widehat{\nu }}}^\ell , \nu ^\ell _*\rangle + \langle {{\widehat{\nu }}}^\ell , \tau ^\ell _*\rangle \chi (x){\mathscr {L}}^\ell _m (a^{i_m}(x),a^{j_m}(x) ) \right| ^2 \ge \frac{2}{C_G}\sum _i |a^i(x)|^2 , \end{aligned} \end{aligned}$$where \({{\widehat{\nu }}}^i\) is the normal vector of \({{\widehat{\gamma }}}^i\), and \(i_m\) (resp. \(j_m\)) denotes the minimal (resp. intermediate) element of \(I_m\), for any continuous functions \(a^1,\ldots ,a^N\);
(2)
there exist \(c_1,c_2>0\) such that
$$\begin{aligned} c_1\le |\partial _x {{\widehat{\gamma }}}^i|^{-1} \le c_2, \end{aligned}$$for any i;
(3)
there is \(C_G'>2\) such that
-
if \(\Xi \) is a smooth regular network having the same endpoints as \(\Gamma _*\), defined by immersions \(\xi ^i\) such that \(\Vert \xi ^i-\gamma ^i_*\Vert _{H^2}< C_G' r\), then \({\textrm{L}}(\Xi )\ge {\textrm{L}}(\Gamma _*)\);
-
\(\Vert {{\widehat{\gamma }}}^i - \gamma ^i_*\Vert _{H^2}< \min \{(C_G'-1) r, \sigma /2 \}\).
We claim that whenever \({{\widehat{\gamma }}}^i_t=\gamma ^i_*+ {\textsf{N}}^i_t\nu ^i_*+{\textsf{T}}^i_t\tau ^i_*\) is a smooth solution to the motion by curvature, for \({\overline{{\textsf{N}}}}_t\in B_r(0)\subset V\) for any t, where we used the notation of Theorem 3.13 and the \({\textsf{T}}_t^i\)’s are adapted, then for any \(m \in {\mathbb {N}}\) with \(m\ge 3\) there exists \(C_m=C_m(r, \Gamma _*)>0\) such that
for any t. The claim easily follows by combining the fact that \({\overline{{\textsf{N}}}}_t\in B_r(0)\) ensures a uniform \(C^1\)-bound on the parametrizations with the fact that uniform upper bounds on the \(L^2(\textrm{d}s)\)-norm of the curvature along a motion by curvature imply uniform \(L^2(\textrm{d}s)\)-bounds on every derivative of the immersion. The proof of (5.4) is postponed to the end of the proof.
Taking into account Lemma 4.1 and Corollary 3.5, we can fix \(\eta >0\) such that:
(i)
if immersions \({{\widehat{\gamma }}}^i\) define a regular network \({{\widehat{\Gamma }}}\) with the same endpoints as \(\Gamma _*\) such that \(\Vert {{\widehat{\gamma }}}^i - \gamma ^i_*\Vert _{C^0} \le 2\eta \), then \({\textrm{L}}(\Gamma _*)\le {\textrm{L}}({{\widehat{\Gamma }}})\);
(ii)
if \({{\widehat{\gamma }}}^i_t\) define a one-parameter family of immersions satisfying the assumptions of Corollary 3.5 and \(\sum _i \Vert {{\widehat{\gamma }}}^i_t - \gamma ^i_t \Vert _{C^5} \le \eta \) for any t around some \(t_0\), then the resulting \({\textsf{N}}^i_t\) verify \(\sum _i \Vert {\textsf{N}}^i_t\Vert _{H^4(\textrm{d}x)}< \tfrac{r}{2}\) for any t around \(t_0\);
(iii)
if immersions \({{\widehat{\gamma }}}^i\) define a network \({{\widehat{\Gamma }}}\) such that \(\Vert {{\widehat{\gamma }}}^i - \gamma ^i_*\Vert _{C^1} \le \eta \), then
$$\begin{aligned} \left| {\textrm{L}}({{\widehat{\Gamma }}}) - {\textrm{L}}(\Gamma _*) \right| ^\theta \le \frac{\theta r^{\frac{1}{\zeta }} }{C_{\textrm{LS}} \sqrt{c_2C_G} \left( 100\, \lambda C_3\right) ^{\frac{1}{\zeta }}}. \end{aligned}$$
With the above choices, we want to show that the statement follows by choosing
where \({\overline{\varepsilon }}\) is given by Lemma 5.1.
So let \(\Gamma _0\) be as in the statement. By Lemma 5.1, the flow \(\Gamma _t\) starting from \(\Gamma _0\) satisfies
for some \(\tau \in [0,T)\) and smooth reparametrizations \(\sigma ^i\). Then by (i) we have \({\textrm{L}}(\Gamma _\tau )\ge {\textrm{L}}(\Gamma _*)\). Moreover, if \({\textrm{L}}(\Gamma _\tau )= {\textrm{L}}(\Gamma _*)\), then (i) implies that \(\Gamma _\tau \) is a local minimizer for the length in \(C^0\), and thus it is minimal up to reparametrization, and the resulting flow is stationary. Hence we can assume that \({\textrm{L}}(\Gamma _\tau )> {\textrm{L}}(\Gamma _*)\).
Moreover, by Corollary 3.5 and (ii) we get the existence of \({\textsf{N}}^i_t,{\textsf{T}}^i_t,\varphi ^i_t\) as in Corollary 3.5 such that
with
for any \(t \in [\tau ,\tau _1)\) with \(\tau _1>\tau \).
We define the nonincreasing function
for \(t \in [0,T)\).
Let us further define S as the supremum of all \(s \in [\tau ,T)\) such that \(\gamma ^i_t\) can be written as in (5.6) for some reparametrizations \(\varphi _t\) and functions \({\textsf{N}}^i_t\), continuously differentiable in time, with \(\sum _i \Vert {\textsf{N}}^i_t\Vert _{H^2(\textrm{d}x)} < r\) for any \(t \in [\tau ,s]\).
We have that \(S\ge \tau _1>\tau \). Moreover, we can assume that \({\textrm{L}}(\Gamma _s)>{\textrm{L}}(\Gamma _*)\) for any \(s \in [\tau ,S)\). Indeed, if instead \({\textrm{L}}(\Gamma _s)={\textrm{L}}(\Gamma _*)\) for some s, then \(\Gamma _s\) locally minimizes the length in \(H^2\): if immersions \({{\bar{\gamma }}}^i\) define a smooth regular network with \(\Vert {{\bar{\gamma }}}^i-{{\widetilde{\gamma }}}^i_s\Vert _{H^2}< r\), then \(\Vert {{\bar{\gamma }}}^i-\gamma ^i_*\Vert _{H^2} \le \Vert {{\bar{\gamma }}}^i-{{\widetilde{\gamma }}}^i_s\Vert _{H^2} + \Vert {{\widetilde{\gamma }}}^i_s-\gamma ^i_*\Vert _{H^2} < C_G'r\) by (3), and then \({\textrm{L}}(\Gamma _s)={\textrm{L}}(\Gamma _*) \le {\textrm{L}}({{\bar{\Gamma }}})\) by (3). Hence in this case \(\Gamma _s\) is minimal, up to reparametrization, and the resulting flow is stationary.
Therefore we can assume \(H(t)>0\) for \(t \in (\tau ,S)\), and then H is differentiable on \((\tau ,S)\). We now want to show that \(S=T=+\infty \).
We differentiate
for any \(t \in (\tau ,S)\), where we denoted \( \left\| (\partial _t\Gamma _t)^\perp \right\| _{L^2(\textrm{d}s)}^2 :=\sum _i \int |(\partial _t\gamma ^i_t)^\perp |^2 \,\mathrm ds = \sum _i \int |(\partial _t{{\widetilde{\gamma }}}^i_t)^\perp |^2 \,\mathrm ds\), where we could apply the Łojasiewicz–Simon inequality in Corollary 3.14 thanks to (3). From the above estimate we get
for any \(t \in (\tau ,S)\). Hence
for any \(s \in (\tau ,S)\). Recalling (5.5) and (iii), we conclude that
for any \(s \in (\tau ,S)\). Exploiting the interpolation inequality (5.3) with \(k=2\) we obtain
for any \(s \in (\tau ,S)\). Since \(\Vert {\overline{N}}_\tau \Vert _{H^2(\textrm{d}x)}<\tfrac{r}{2}\), a simple contradiction argument implies that \(S=T\) and \(\Vert {\overline{N}}_t\Vert _{H^2(\textrm{d}x)}<\tfrac{r}{2}+\tfrac{r}{50}\) for any \(t \in [\tau ,T)\). Hence Theorem 2.13 implies that \(T=+\infty \).
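The convergence mechanism exploited above can be illustrated on a one-dimensional toy model (our own sketch, not part of the proof): for the gradient flow of \(f(x)=x^4\), whose Łojasiewicz exponent at the origin can be taken \(\theta =1/4\), the quantity \(f(x(t))^{\theta }\) controls the length of the trajectory, which is therefore finite even though the decay rate is only polynomial.

```python
# Toy one-dimensional analogue (added illustration): explicit Euler scheme for
# the gradient flow x' = -f'(x) of f(x) = x^4. The Lojasiewicz inequality
# ensures the total trajectory length stays finite, so x(t) converges.
def flow(x0=1.0, dt=1e-3, steps=200_000):
    x, length = x0, 0.0
    for _ in range(steps):
        step = -4 * x ** 3 * dt     # Euler step of x' = -4 x^3
        length += abs(step)
        x += step
    return x, length

x_final, length = flow()
assert 0.0 < x_final < 0.05    # slow polynomial decay toward the critical point
assert length < 1.0            # but finite total length of the trajectory
```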
We claim that \(H(t)\searrow 0\) as \(t\rightarrow +\infty \). Indeed, since \(S=T=+\infty \), we now know that (5.4) holds for any time. Hence there exists a sequence of times \(t_n\rightarrow +\infty \) such that the parametrizations \({{\widetilde{\gamma }}}^i_{t_n}\) converge in \(C^2\) to limit parametrizations \({{\widetilde{\gamma }}}^i_\infty :=\gamma ^i_* + {\textsf{N}}^i_\infty \nu ^i_* + {\textsf{T}}^i_\infty \tau ^i_*\) with \({\overline{N}}_\infty \in B_r(0) \subset V\). Moreover, \({{\widetilde{\gamma }}}^i_\infty \) parametrize a minimal network \({{\widetilde{\Gamma }}}_\infty \). Hence, using (3) and Corollary 3.14, the length of \({{\widetilde{\Gamma }}}_\infty \) has to be equal to the length of \(\Gamma _*\). As H is nonincreasing, \(H(t)\searrow 0\) as \(t\rightarrow +\infty \).
Exploiting the fact that H(t) is infinitesimal as t diverges, estimating as in (5.8) for large times implies that the curve \({\overline{{\textsf{N}}}}_t\) is Cauchy in \(L^2(\textrm{d}x)\), and thus its full limit \({\overline{{\textsf{N}}}}_\infty \) in \(L^2(\textrm{d}x)\) exists as \(t\rightarrow +\infty \). Interpolating using (5.3) and (5.4), we then conclude that the convergence holds in \(H^m\) for any m.
We are now left to prove the claim (5.4). For the sake of clarity, we consider the case \(m=3\) only, the general case following by induction. We differentiate the curvature \(\widehat{{\varvec{k}}}^i_t\) of \({{\widehat{\gamma }}}^i_t\) and multiply by the normal \({{\widehat{\nu }}}^i_t\) to get the identity
Taking absolute values and recalling (1), (2), we deduce that
where \(C(r,\Gamma _*)>0\) here is a constant that may change from line to line.
Recalling [43, Proposition 5.8], we know that along a motion by curvature the \(L^2(\textrm{d}s)\)-norms of the derivatives of the curvature are bounded by the \(L^2(\textrm{d}s)\)-norms of the curvature and by the inverse of the length of the edges. Hence the assumption \({\overline{{\textsf{N}}}}_t \in B_r(0)\subset V\) guarantees that \(\int |\partial _s \widehat{{\varvec{k}}}^i_t| \,\mathrm ds \le C(r,\Gamma _*)\). In particular \(\Vert {\textsf{N}}^i_t \Vert _{W^{3,1}(\textrm{d}x)} \le C(r,\Gamma _*)\), and thus \(\Vert {\textsf{N}}^i_t \Vert _{W^{2,\infty }(\textrm{d}x)}\le C(r,\Gamma _*)\). Therefore we can improve the estimate on \(\partial _x^3 {\textsf{N}}^i_t\) by first taking squares and then integrating in (5.9), which yields
thus proving the claim (5.4). \(\square \)
An immediate consequence is the next result, which promotes subconvergence of the motion by curvature to full convergence.
Theorem 5.3
Let \(\Gamma _t: G\rightarrow {\mathbb {R}}^2\) be a smooth motion by curvature defined on \([0,+\infty )\). Let \(\Gamma _\infty :G\rightarrow {\mathbb {R}}^2\) be a minimal network such that \(\Gamma _{t_n} \rightarrow \Gamma _\infty \) in \(H^2\) for some sequence \(t_n\nearrow +\infty \) as \(n\rightarrow +\infty \). Then \(\Gamma _{t} \rightarrow \Gamma _\infty \) smoothly as \(t\rightarrow +\infty \), up to reparametrization.
Proof
The statement immediately follows from Theorem 5.2. \(\square \)
We conclude this part by collecting some observations implied by the previous stability results.
Remark 5.4
Theorem 5.3 can be combined with [43, Proposition 13.5] in the following way. Suppose that \(\Gamma _t: G\rightarrow {\mathbb {R}}^2\) is a motion by curvature of a tree-like network, i.e., G has no cycles, defined on \([0,+\infty )\). If the sequential limit \(\Gamma _\infty \) along a sequence of times \(t_n\), which always exists by [43, Proposition 13.5], is regular, then \(\Gamma _\infty \) is the full limit of \(\Gamma _t\) as \(t\rightarrow +\infty \).
However, the example in the next section shows that in general the limit \(\Gamma _\infty \) may be degenerate.
Remark 5.5
If the network \(\Gamma _*\) in Theorem 5.2 is an isolated critical point of the length, then \(\Gamma _\infty \) coincides with \(\Gamma _*\). This is always the case if \(\Gamma _*\) is a tree, i.e., G has no cycles, since there exist only finitely many minimal trees \({\widehat{\Gamma }}:G\rightarrow {\mathbb {R}}^2\) having the same endpoints as \(\Gamma _*\).
Remark 5.6
In the notation of Theorem 5.2, in some cases we are able to conclude that \(\Gamma _\infty \) coincides with \(\Gamma _*\), even if \(\Gamma _*\) is not an isolated critical point of the length.
Suppose that \(\Gamma _*\) is a minimal network composed of a regular hexagon H with area \(A_*\) and six straight segments connecting the vertices of H to the vertices of a bigger regular hexagon. Then \(\Gamma _*\) is not an isolated critical point of the length: indeed, there exists a one-parameter family of critical points with the same length, namely all networks composed of concentric hexagons and straight segments connecting the endpoints, see Fig. 3. It can be proved that there are no other minimal networks with this topology and with the same endpoints.
In the above notation, suppose now that \(\Gamma _0\) is a regular network with the same endpoints and the same topology as \(\Gamma _*\), sufficiently close to \(\Gamma _*\) in \(H^2\), and such that the area enclosed by the loop equals \(A_*\). Then \(\Gamma _\infty \) coincides with \(\Gamma _*\). Indeed, the area enclosed by a loop composed of six curves is preserved during the evolution (see [43, Section 8.2]) and \(\Gamma _*\) is the unique minimal network with area \(A_*\) among the one-parameter family of possible minimal networks.
6 Convergence to a degenerate network in infinite time
In this section we construct an example of a motion by curvature existing for all times, with uniformly bounded curvature, smoothly converging to a degenerate network. More precisely, the following result holds.
Theorem 6.1
There exists a smooth regular network \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) such that the motion by curvature \(\Gamma _t\) starting from \(\Gamma _0\) exists for all times, the length of each curve \(\gamma ^i_t\) is strictly positive for any time, the curvature of each curve \(\gamma ^i_t\) is uniformly bounded from above, and \(\Gamma _t\) smoothly converges to a degenerate network \(\Gamma _\infty \) as \(t\rightarrow +\infty \), up to reparametrization. Specifically, the length of a distinguished curve \(\gamma ^0_t\) tends to zero as \(t\rightarrow +\infty \).
Proof
The proof of the statement follows by putting together the observations in Step 1, Step 2, and Step 3 below. \(\square \)
From now on and for the rest of this section, let \(\Gamma _0:G\rightarrow {\mathbb {R}}^2\) be a smooth regular network as in Fig. 1. We assume that \(\Gamma _0\) is composed of five curves, is symmetric with respect to the horizontal and vertical axes, the middle curve \(\gamma ^0\) is a segment, and the remaining four curves are convex, i.e., their oriented curvature has a sign. Moreover, the network has four endpoints located at the vertices of a rectangle with sides of length \(2/\sqrt{3}\) and 2, so that the diagonals of the rectangle meet forming angles of \(\tfrac{2}{3} \pi \) and \(\tfrac{\pi }{3}\), see Fig. 1.
We want to show that the motion by curvature \(\Gamma _t\) starting from such a datum \(\Gamma _0\) satisfies the statement of Theorem 6.1. The candidate limit is given by the degenerate network defined by the diagonals of the rectangle, that is, the dotted lines in Fig. 1.
By symmetry, it is sufficient to study the evolution of the middle curve and of the two bottom curves in Fig. 1. To fix the notation, we depict this part of the network in Fig. 4. Observe that the straight middle curve \(\gamma ^0\) is parametrized from the bottom to the top, while the convex curves \(\gamma ^1, \gamma ^2\) have the endpoint 1 at the junction. This is in contrast with the usual choice we adopted of setting the endpoint 1 at the endpoints of the network; however, we choose this parametrization here in order to avoid an unnecessary presence of minus signs in the computations below. Finally, we denote by
the vertical unit vector, coinciding with the tangent vector of the curve \(\gamma ^0\).
Recalling Remark 2.12, we can assume that the motion by curvature is smooth and evolves by the special flow, i.e., \(\partial _t\gamma ^i_t= |\partial _x\gamma ^i_t|^{-2} \partial ^2_x \gamma ^i_t\) for any i. Decomposing \(\partial _t \gamma ^i_t\) into tangential and normal components, we denote
where we denote by \({\widetilde{k}}_i\) the oriented curvature of \(\gamma ^i_t\), i.e., \({\widetilde{k}}_i:=\langle {\varvec{k}}^i_t, \nu ^i_t\rangle \). We drop the subscript t in \({\widetilde{k}}_i\) and \(\lambda _i\) for ease of notation.
At least for short times, by choice of the initial datum, we can consider the functions \(v_i\) defined by
for \(i=1,2\). We further assume that
We preliminarily observe that, by symmetry and the choice of orientations, we have \({\widetilde{k}}_1=-{\widetilde{k}}_2\) and \(\partial _s {\widetilde{k}}_1 = - \partial _s {\widetilde{k}}_2\) at any time and point. Moreover, the symmetry and the evolution of the curvature imply that \(\gamma ^0_t\) is a vertical segment for any time; then \(\partial _t\gamma ^0_t(t,0)\) and \(\omega \) are parallel, hence \(\lambda _1(t,1)= \langle \partial _t\gamma ^0_t(t,0), \tau ^1_t\rangle = \langle \partial _t\gamma ^0_t(t,0), \tau ^2_t\rangle = \lambda _2(t,1)\) for any \(t\in [0,T)\). On the other hand, the boundary condition obtained from the identity \(\partial _t \langle \tau ^1_t(1),\tau ^2_t(1)\rangle =0\), see [43], reads
Therefore we get that
for \(i=1,2\) and any \(t \in [0,T)\). Finally, recalling from [43, Section 3] that tangential velocities at a junction can be expressed in terms of normal velocities, which easily follows from the identity \(\partial _t \gamma ^1_t(1)=\partial _t \gamma ^2_t(1)\), we have that
Step 1
Letting \(T>0\) be the maximal time of existence of the flow, we want to prove that the functions \(v_i\) are defined on [0, T) and
$$\begin{aligned} {\widetilde{k}}_1 \ge 0 ,\qquad \qquad 1\le v_1 \le \frac{2}{\sqrt{3}}, \end{aligned}$$(6.4)for any \(x \in [0,1]\) and \(t \in [0,T)\). In particular, the curves \(\gamma ^1_t, \gamma ^2_t\) can be parametrized by convex graphs on a fixed interval for any time.
By basic computations on the evolution of geometric quantities, see [17, 43], one easily obtains
$$\begin{aligned} (\partial _t - \partial ^2_s) v_1 = -v_1({\widetilde{k}}_1)^2 -2 \frac{(\partial _s v_1)^2}{v_1} + \lambda _1 \partial _s v_1. \end{aligned}$$(6.5)Recalling that
$$\begin{aligned} (\partial _t - \partial ^2_s) {\widetilde{k}}_1 = \lambda _1 \partial _s {\widetilde{k}}_1 + ({\widetilde{k}}_1)^3, \end{aligned}$$(6.6)we obtain
$$\begin{aligned} \begin{aligned} (\partial _t - \partial ^2_s)(v_1 {\widetilde{k}}_1) = \left[ \lambda _1 -2v_1 {\widetilde{k}}_1 \langle \tau ^1_t, \omega \rangle \right] \partial _s (v_1 {\widetilde{k}}_1) . \end{aligned} \end{aligned}$$Exploiting (6.2) and (6.3), we see that \((v_1 {\widetilde{k}}_1)\) satisfies a Neumann boundary condition at \(x=1\), that is
$$\begin{aligned} \begin{aligned} \partial _s(v_1 {\widetilde{k}}_1) \big |_{x=1}&= v_1 \partial _s {\widetilde{k}}_1 + ({\widetilde{k}}_1)^2 (v_1)^2 \langle \tau ^1_t,\omega \rangle \, \big |_{x=1}\\&= -\frac{2}{\sqrt{3}}\lambda _1 {\widetilde{k}}_1 + ({\widetilde{k}}_1)^2 \left( \frac{2}{\sqrt{3}}\right) ^2 \frac{1}{2} \, \bigg |_{x=1} \\&= -\frac{2}{3} ({\widetilde{k}}_1)^2 + \frac{2}{3} ({\widetilde{k}}_1)^2 \, \bigg |_{x=1} = 0. \end{aligned} \end{aligned}$$Let \({\overline{T}}\le T\) be the maximal time such that \(v_1\) is well defined. For \(\varepsilon ,\delta >0\), we consider the function \(f:=v_1 {\widetilde{k}}_1 + \varepsilon t + \delta \). By the above observations and since \({\widetilde{k}}_1(t,0)=0\), then f satisfies
$$\begin{aligned} {\left\{ \begin{array}{ll} (\partial _t - \partial ^2_s) f = \left[ \lambda _1 -2v_1 {\widetilde{k}}_1 \langle \tau ^1_t, \omega \rangle \right] \partial _s f + \varepsilon &{} \text {on } [0,{\overline{T}}) \times [0,1], \\ f(0,x) \ge \delta &{} \forall \, x \in [0,1],\\ f(t,0) \ge \delta &{} \forall \, t \in [0,{\overline{T}}), \\ \partial _s f (t,1) = 0 &{} \forall \, t \in [0,{\overline{T}}). \end{array}\right. } \end{aligned}$$By a standard argument involving the maximum principle, we can prove that \(f>0\) at any \((t,x) \in [0,{\overline{T}})\times [0,1]\). More precisely, if \({\overline{t}}>0\) is the first time such that there is \({\overline{x}}\) such that \(f({\overline{t}}, {\overline{x}}) =0\), then \({\overline{x}} \in (0,1]\). The case \({\overline{x}}=1\) is excluded as the Hopf Lemma (see [53, Theorem 6, p. 174]) would imply \(\partial _s f({\overline{t}},1) <0\). Also the case \({\overline{x}}\in (0,1)\) leads to a contradiction, as in this case \(0 \ge (\partial _t - \partial ^2_s) f ({\overline{t}},{\overline{x}}) \ge \varepsilon >0 \).
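The positivity argument above can also be observed on a crude finite-difference model (an added illustration with an arbitrary bounded drift, not the geometric flow itself): a discrete solution of \(\partial _t f=\partial _s^2 f + b\,\partial _s f+\varepsilon \) with \(f\ge \delta \) on the parabolic boundary and a homogeneous Neumann condition at \(x=1\) stays strictly positive.

```python
import math

# Crude explicit scheme (added illustration) for f_t = f_xx + b f_x + eps with
# f = delta at x = 0 and a homogeneous Neumann condition at x = 1; the discrete
# maximum principle (all stencil weights nonnegative under the CFL condition)
# keeps f strictly positive, mirroring the argument in the proof.
def evolve(n=50, steps=2000, eps=1e-3, delta=1e-2, b=0.5):
    dx = 1.0 / n
    dt = 0.25 * dx * dx                    # CFL-stable explicit time step
    f = [delta + 0.1 * math.sin(math.pi * i * dx) for i in range(n + 1)]
    for _ in range(steps):
        g = f[:]
        for i in range(1, n):
            fxx = (f[i + 1] - 2 * f[i] + f[i - 1]) / dx ** 2
            fx = (f[i + 1] - f[i - 1]) / (2 * dx)
            g[i] = f[i] + dt * (fxx + b * fx + eps)
        g[0] = delta                       # lower barrier at x = 0
        g[n] = g[n - 1]                    # Neumann condition at x = 1
        f = g
    return f

assert min(evolve()) > 0.0
```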
The arbitrariness of \(\varepsilon ,\delta \) implies that \( v_1 {\widetilde{k}}_1\ge 0\) on \([0,{\overline{T}})\times [0,1]\). Since by continuity \(v_1\) must be strictly positive on \([0,{\overline{T}})\times [0,1]\), it follows that \({\widetilde{k}}_1\ge 0\) on \([0,{\overline{T}})\times [0,1]\). Since convexity is preserved up to time \({\overline{T}}\), recalling assumption (6.1) we obtain
$$\begin{aligned} \begin{aligned} \partial _t \langle \nu ^1_t,\omega \rangle |_{x=0}&= -\partial _s {\widetilde{k}}_1 \langle \tau ^1_t,\omega \rangle |_{x=0}\le 0, \\ \partial _s\langle \nu ^1_t,\omega \rangle&= \langle - {\widetilde{k}}_1 \tau ^1_t, \omega \rangle \le 0, \end{aligned} \end{aligned}$$where we used that \(\partial _s{\widetilde{k}}_1|_{x=0}\ge 0\) since \({\widetilde{k}}_1(0)=0\) is a global minimum for \({\widetilde{k}}_1\). Therefore the minimum of \(\langle \nu ^1_t,\omega \rangle \) is achieved at \(x=1\), that is \(\tfrac{\sqrt{3}}{2}= \langle \nu ^1_t,\omega \rangle |_{x=1} \le \langle \nu ^1_t,\omega \rangle \le 1\). The positive lower bound on \( \langle \nu ^1_t,\omega \rangle \) implies that \({\overline{T}}=T\) and completes the proof of the first step.
Step 2
We claim that there exists a constant \(C>0\) such that \({\widetilde{k}}_1 \le C\) for any \(t\in [0,T)\). Moreover, for any \(k\ge 1\) there is \(C_k>0\) such that \(|\partial _s^k{\widetilde{k}}_1| \le C_k\) for any \(t\in [0,T)\).
By the evolution equations for \(v_1\) and \({\widetilde{k}}_1\), we can compute
$$\begin{aligned} \begin{aligned} (\partial _t - \partial ^2_s) \left( (v_1)^2({\widetilde{k}}_1)^2\right)&= 2 \Big ( \tfrac{1}{2}\lambda _1 \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) \\&\quad -(v_1)^2 (\partial _s {\widetilde{k}}_1)^2 - 3(\partial _s v_1)^2 ({\widetilde{k}}_1)^2 - \partial _s( {\widetilde{k}}_1^2) \,\partial _s (v_1^2) \Big ). \end{aligned} \end{aligned}$$(6.7)By Young's inequality we estimate
$$\begin{aligned} \begin{aligned}&-2\partial _s( {\widetilde{k}}_1^2) \,\partial _s (v_1^2)\\&\quad = - \partial _s( {\widetilde{k}}_1^2) \,\partial _s (v_1^2) - 4 v_1 {\widetilde{k}}_1 (\partial _s v_1) (\partial _s {\widetilde{k}}_1) \\&\quad = - \partial _s \Big (v_1^2 {\widetilde{k}}_1^2 \Big ) \partial _s(v_1^2) \, v_1^{-2} + ( {\widetilde{k}}_1)^2 v_1^{-2} \big (\partial _s(v_1^2) \big )^2 - 4 v_1 {\widetilde{k}}_1 (\partial _s v_1) (\partial _s {\widetilde{k}}_1) \\&\quad = -2 v_1^{-1} \,\partial _s v_1 \, \partial _s \Big (v_1^2 {\widetilde{k}}_1^2 \Big ) + 4 ( {\widetilde{k}}_1)^2 \big (\partial _s v_1 \big )^2 - 4 v_1 {\widetilde{k}}_1 (\partial _s v_1) (\partial _s {\widetilde{k}}_1) \\&\quad \le -2 v_1^{-1} \,\partial _s v_1 \, \partial _s \Big (v_1^2 {\widetilde{k}}_1^2 \Big ) + 4 ( {\widetilde{k}}_1)^2 \big (\partial _s v_1 \big )^2 + 2(v_1)^2 (\partial _s {\widetilde{k}}_1)^2 + 2({\widetilde{k}}_1)^2 (\partial _s v_1)^2 \\&\quad = 2 \Big ( - v_1^{-1} \,\partial _s v_1 \, \partial _s \Big (v_1^2 {\widetilde{k}}_1^2 \Big ) + 3 ( {\widetilde{k}}_1)^2 \big (\partial _s v_1 \big )^2 + (v_1)^2 (\partial _s {\widetilde{k}}_1)^2 \Big ). \end{aligned} \end{aligned}$$Inserting in (6.7) we get
$$\begin{aligned} \begin{aligned} (\partial _t - \partial ^2_s) \left( (v_1)^2({\widetilde{k}}_1)^2\right)&\le 2 \Big ( \tfrac{1}{2}\lambda _1 \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) -v_1^{-1} \,\partial _s v_1 \, \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) \Big ) \\&= \left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) . \end{aligned} \end{aligned}$$(6.8)Observe that \(v_1=v_2\) by symmetry, hence all the above considerations hold for \(v_2\) as well. We further consider
$$\begin{aligned} g_i:=({\widetilde{k}}_i)^2 (v_i)^2, \end{aligned}$$for \(i=1,2\). Again, actually \(g_1=g_2\) by symmetry. Observe that
$$\begin{aligned} g_1(t,0)=\partial _s g_1(t,0) = 0, \end{aligned}$$(6.9)as \({\widetilde{k}}_1(t,0)=0\), for any \(t \in [0,T)\). Moreover
$$\begin{aligned} \begin{aligned} \partial _s g_1(t,1)&= 2 \Big ( {\widetilde{k}}_1 (\partial _s {\widetilde{k}}_1) (v_1)^2 + ({\widetilde{k}}_1)^2 v_1 (\partial _s v_1) \Big ) \, \Big |_{(t,1)} \\&= 2 \Big ( {\widetilde{k}}_1 (\partial _s {\widetilde{k}}_1) (v_1)^2 + ({\widetilde{k}}_1)^2 v_1 (v_1)^2 {\widetilde{k}}_1 \langle \tau ^1_t,\omega \rangle \Big ) \, \Big |_{(t,1)} \\&= 2 \Big ( {\widetilde{k}}_1 (\partial _s {\widetilde{k}}_1) (2/\sqrt{3})^2 + ({\widetilde{k}}_1)^3 (2/\sqrt{3})^3 (1/2) \Big ) \, \Big |_{(t,1)} \\&= \frac{8}{3}{\widetilde{k}}_1 \Big ( (\partial _s {\widetilde{k}}_1) + ({\widetilde{k}}_1)^2 /\sqrt{3} \Big ) \, \Big |_{(t,1)} =0, \end{aligned} \end{aligned}$$(6.10)where the last equality follows from (6.2) and (6.3). Obviously, \(\partial _s g_2(t,1) =0\) as well.
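The vanishing of \(\partial _s g_1(t,1)\) computed above is a purely algebraic cancellation and can be checked numerically (an added illustration; the boundary relations \(v_1=2/\sqrt{3}\), \(\langle \tau ^1_t,\omega \rangle =1/2\), \(\partial _s v_1=(v_1)^2{\widetilde{k}}_1\langle \tau ^1_t,\omega \rangle \) and \(\partial _s{\widetilde{k}}_1=-({\widetilde{k}}_1)^2/\sqrt{3}\) at \(x=1\) are those used in the last equality of (6.10)):

```python
import math

# Check (added illustration) that the junction flux of g1 = k^2 v^2 vanishes
# for any value of the curvature k at x = 1, using the boundary relations
# quoted from (6.2), (6.3): ds_k = -k^2/sqrt(3) and ds_v = v^2 k <tau, omega>.
def flux_g1(k):
    v = 2 / math.sqrt(3)                   # v1(t, 1)
    ds_k = -k ** 2 / math.sqrt(3)          # ds k1 at x = 1
    ds_v = v ** 2 * k * 0.5                # ds v1 at x = 1, <tau, omega> = 1/2
    return 2 * (k * ds_k * v ** 2 + k ** 2 * v * ds_v)

for k in (0.0, 0.7, -1.3, 2.5):
    assert abs(flux_g1(k)) < 1e-12
```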
Now take \(t_0 \in (0,T)\) and let \(p_0\in {\mathbb {R}}^2\) be the midpoint of the image of the straight edge \(\gamma ^0\). Without loss of generality we can assume that \(p_0=0\) is the origin of \({\mathbb {R}}^2\). Hence let
$$\begin{aligned} \rho (t,p) :=\frac{1}{\sqrt{4\pi (t_0-t)}} \exp \left( -\frac{|p|^2}{4(t_0-t)} \right) . \end{aligned}$$Denoting \(\rho \circ \gamma ^i_t:=\rho (t,\gamma ^i_t)\), we observe that
$$\begin{aligned} \begin{aligned} -\partial _s(\rho \circ \gamma ^1_t) \,\big |_{(t,1)}&= -\left\langle \nabla \rho |_{(t,\gamma ^1_t(1))} , \tau ^1_t(1)\right\rangle = \frac{\rho \circ \gamma ^1_t}{2(t_0-t)} \langle \gamma ^1_t(1), \tau ^1_t(1)\rangle \le 0, \end{aligned} \end{aligned}$$(6.11)for any \(t \in (0,t_0)\), where the inequality follows by the choice of the origin of \({\mathbb {R}}^2\).
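The identity behind (6.11), namely \(\nabla \rho =-\rho \, p/(2(t_0-t))\), can be checked by a finite difference (an added illustration, with an arbitrary test point and direction):

```python
import math

def rho(t, p, t0=1.0):
    """Backward heat kernel centered at the origin with singular time t0."""
    r2 = p[0] ** 2 + p[1] ** 2
    return math.exp(-r2 / (4 * (t0 - t))) / math.sqrt(4 * math.pi * (t0 - t))

# Verify <grad rho, tau> = -rho <p, tau> / (2 (t0 - t)), as used in (6.11),
# by a centered finite difference along an arbitrary unit direction tau.
t, p = 0.3, (0.4, -0.2)
tau = (1 / math.sqrt(2), 1 / math.sqrt(2))
h = 1e-6
plus = rho(t, (p[0] + h * tau[0], p[1] + h * tau[1]))
minus = rho(t, (p[0] - h * tau[0], p[1] - h * tau[1]))
directional = (plus - minus) / (2 * h)
closed_form = -rho(t, p) * (p[0] * tau[0] + p[1] * tau[1]) / (2 * (1.0 - t))
assert abs(directional - closed_form) < 1e-6
```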
Now let \(A:=\max _{[0,1]} ({\widetilde{k}}_1)^2 (v_1)^2\, \big |_{t=0}>0\) and define
$$\begin{aligned} f_i(t,x) :=\left( \max \left\{ ({\widetilde{k}}_i)^2 (v_i)^2 -A, 0 \right\} \right) ^2. \end{aligned}$$Since \(F(y):=\left( \max \left\{ y -A, 0 \right\} \right) ^2\) is of class \(C^{1,1}\), we have that \(f_i(t,\cdot ) \in H^2\) for any t and the chain rule holds almost everywhere, i.e., \(\partial _s f_i =2 \max \left\{ ({\widetilde{k}}_i)^2 (v_i)^2 -A, 0 \right\} \partial _s(({\widetilde{k}}_i)^2 (v_i)^2)\) and \(\partial _s^2 f_i =2\left[ \partial _s(({\widetilde{k}}_i)^2 (v_i)^2) \right] ^2 + 2 \max \left\{ ({\widetilde{k}}_i)^2 (v_i)^2 -A, 0 \right\} \partial _s^2(({\widetilde{k}}_i)^2 (v_i)^2)\) almost everywhere. Analogously, \(f_i\) is differentiable with respect to t at any (t, x) and \(\partial _t f_i = 2 \max \left\{ ({\widetilde{k}}_i)^2 (v_i)^2 -A, 0 \right\} \partial _t(({\widetilde{k}}_i)^2 (v_i)^2)\) is continuous on \([0,T)\times [0,1]\).
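That \(F(y)=\left( \max \{y-A,0\}\right) ^2\) is differentiable with Lipschitz derivative \(F'(y)=2\max \{y-A,0\}\), including across the corner \(y=A\), can be confirmed numerically (an added illustration):

```python
# Numerical check (added illustration) that F(y) = max{y - A, 0}^2 is C^1 with
# derivative F'(y) = 2 max{y - A, 0}, also across the corner at y = A; this is
# what allows applying the chain rule to f_i almost everywhere.
def F(y, A=1.0):
    return max(y - A, 0.0) ** 2

def dF(y, A=1.0):
    return 2 * max(y - A, 0.0)

h = 1e-7
for y in (0.5, 1.0, 1.7):
    centered = (F(y + h) - F(y - h)) / (2 * h)
    assert abs(centered - dF(y)) < 1e-6
```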
Recalling (6.8) and using Young's inequality we estimate
$$\begin{aligned} \begin{aligned} (\partial _t - \partial _s^2) f_1&=2 \max \left\{ ({\widetilde{k}}_1)^2 (v_1)^2 -A, 0 \right\} (\partial _t - \partial _s^2) \big ( ({\widetilde{k}}_1)^2 (v_1)^2\big )\\&\quad -2\left[ \partial _s(({\widetilde{k}}_1)^2 (v_1)^2) \right] ^2 \\&\le 2 \max \left\{ ({\widetilde{k}}_1)^2 (v_1)^2 -A, 0 \right\} \left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] \partial _s \left( (v_1)^2({\widetilde{k}}_1)^2\right) \\&\quad -2\left[ \partial _s(({\widetilde{k}}_1)^2 (v_1)^2) \right] ^2\\&\le \frac{1}{2}\left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] ^2 f_1, \end{aligned} \end{aligned}$$(6.12)for any t and almost every x. We apply the monotonicity-type formula from Lemma A.2 with \(f=f_1\) to get
$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t} \int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds&\le \int _0^1 ( \rho \circ \gamma ^1_t)(\partial _t-\partial _s^2)f_1 \,\mathrm ds \\&\quad + \int _0^1 \left( \partial _s \lambda _1 -\frac{\lambda _1}{2(t_0-t)}\langle \gamma ^1_t,\tau ^1_t\rangle \right) ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \\&\quad +\big ( ( \rho \circ \gamma ^1_t)\partial _sf_1 - f_1 \partial _s ( \rho \circ \gamma ^1_t)\big )\big |_0^1, \end{aligned}$$for any \(t\in (0,t_0)\). Employing (6.9), (6.10), (6.11), and (6.12), we obtain
$$\begin{aligned} \begin{aligned}&\frac{\textrm{d}}{\textrm{d}t} \int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \\&\quad \le \int _0^1 \left( \frac{1}{2}\left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] ^2+ \partial _s \lambda _1 -\frac{\lambda _1}{2(t_0-t)}\langle \gamma ^1_t,\tau ^1_t\rangle \right) ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \\&\quad \quad - f_1(t,1) \partial _s(\rho \circ \gamma ^1_t)\big |_{(t,1)} \\&\quad \overset{(6.11)}{\le } \int _0^1 \left( \frac{1}{2}\left[ \lambda _1 -2 v_1^{-1} \partial _s v_1 \right] ^2+ \partial _s \lambda _1 -\frac{\lambda _1}{2(t_0-t)}\langle \gamma ^1_t,\tau ^1_t\rangle \right) ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \\&\quad \le C(t_0) \int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds, \end{aligned} \end{aligned}$$(6.13)where \(C(t_0)>0\) is some constant depending on the flow and on the choice of \(t_0\). Since \(\int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds \big |_{t=0} =0\) by definition of A, the differential inequality in (6.13) implies that \(\int _0^1 ( \rho \circ \gamma ^1_t) \, f_1 \,\mathrm ds =0\) for any \(t \in [0,t_0)\). This means that \(f_1(t,x)=0\) for any x and \(t \in [0,t_0)\). By arbitrariness of \(t_0\), we get that
$$\begin{aligned} ({\widetilde{k}}_i)^2 (v_i)^2(t,x) \le \max _{[0,1]} ({\widetilde{k}}_1)^2 (v_1)^2\, \big |_{t=0}, \end{aligned}$$for any x and \(t \in [0,T)\), \(i=1,2\). Taking into account (6.4), the claimed uniform upper bound on \({\widetilde{k}}_i\) follows. The second part of the claim in Step 2 follows by adapting the above reasoning to the derivatives \(\partial _s^k{\widetilde{k}}_i\) in place of \({\widetilde{k}}_i\) or, more easily, by observing that the estimates on the derivatives \(\partial _s^k{\widetilde{k}}_i\) are independent of the length of \(\gamma ^0_t\). Indeed, by locality and uniqueness of the flow, the evolution of \(\gamma ^1_t, \gamma ^2_t\) does not change if \(\gamma ^1_t, \gamma ^2_t\) are considered as edges of a network completely analogous to the one in Fig. 1, except that the length of \(\gamma ^0_0\) is taken arbitrarily large (see also the discussion in Remark 1.4). In such a case the upper bound on the curvature proved above, together with lower bounds away from zero on the length of each edge, implies uniform bounds on the derivatives \(\partial _s^k{\widetilde{k}}_i\) (independently of \({\textrm{L}}(\gamma ^0_t)\)) by classical results like [43, Proposition 5.8].
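The closing argument of Step 2 rests on a Grönwall-type comparison: a nonnegative quantity y with \(y(0)=0\) satisfying \(y'(t)\le C\,y(t)\) must vanish identically, since \(y(t)\le y(0)\,e^{Ct}\). A minimal numerical sketch of ours (hypothetical constant C, explicit Euler steps, not part of the paper):

```python
import math

# Gronwall-type step used after (6.13): if y' <= C*y, y >= 0, y(0) = 0,
# then y stays 0.  We integrate the extremal case y' = C*y explicitly.
def euler_gronwall(y0, C, dt, n):
    y = y0
    for _ in range(n):
        y += dt * C * y   # Euler step for the worst-case equality y' = C*y
    return y

print(euler_gronwall(0.0, 5.0, 1e-3, 1000))   # 0.0: zero initial data stays zero
# Euler underestimates e^{Ct} since 1 + x <= e^x, consistent with the bound:
print(euler_gronwall(1.0, 5.0, 1e-3, 1000) <= math.exp(5.0))   # True
```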
Step 3
We want to show that the length of each curve is strictly positive for any time, \(T=+\infty \), the length of \(\gamma ^0_t\) converges to 0 as \(t\rightarrow +\infty \), and the curves \(\gamma ^1_t, \gamma ^2_t\) smoothly converge to (half of) the diagonals of the rectangle having vertices at the endpoints of the network, up to reparametrization.
By Step 1, we can parametrize \(\gamma ^1_t\) as the graph of a function \(u:[0,T)\times [0,1]\rightarrow {\mathbb {R}}\), as in Fig. 5.
Parametrizing the evolution of an edge \(E^i\) as a graph as in Fig. 5, where the parametrization evolves according to \(\partial _t\gamma ^i_t = \partial ^2_x\gamma ^i_t/|\partial _x\gamma ^i_t|^2\), the function u solves the problem
$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u = \frac{\partial ^2_xu}{1+(\partial _x u)^2} &{} \text {for } (t,x) \in [0,T)\times [0,1], \\ u(t,0)=0 , \\ \partial _x u(t,1) = \tan (\pi /6) = 1/\sqrt{3}, \\ u(0,x)=u_0(x). \end{array}\right. } \end{aligned}$$By the above steps, \(\partial ^2_xu \ge 0\) and \(0\le \partial _x u \le \partial _x u(t,1)= 1/ \sqrt{3}\), for any \(t \in [0,T)\).
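The boundary value problem above can be explored numerically. The following sketch (our illustration, not part of the paper; grid size and time step are arbitrary choices) integrates the quasilinear equation with an explicit finite-difference scheme and checks that the solution approaches the straight profile \(x/\sqrt{3}\):

```python
import numpy as np

# Explicit finite-difference sketch of
#   du/dt = u_xx / (1 + u_x^2),  u(t,0) = 0,  u_x(t,1) = 1/sqrt(3).
def graph_flow(u0, n_steps, dt, n=51):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = u0(x)
    for _ in range(n_steps):
        u[0] = 0.0                       # Dirichlet condition at the fixed endpoint
        u[-1] = u[-2] + h / np.sqrt(3)   # slope 1/sqrt(3), one-sided difference
        uxx = (u[2:] - 2.0*u[1:-1] + u[:-2]) / h**2
        ux = (u[2:] - u[:-2]) / (2.0*h)
        u[1:-1] += dt * uxx / (1.0 + ux**2)
    u[0], u[-1] = 0.0, u[-2] + h / np.sqrt(3)
    return x, u

# Convex initial graph compatible with both boundary conditions:
x, u = graph_flow(lambda x: x**2 / (2.0*np.sqrt(3)), n_steps=20_000, dt=1e-4)
print(np.max(np.abs(u - x / np.sqrt(3))))   # small: u is close to x/sqrt(3)
```

The time step satisfies the stability constraint \(\textrm{d}t/h^2\le 1/2\) of the explicit scheme; the diffusion coefficient \(1/(1+(\partial _xu)^2)\le 1\) only helps.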
We compare the evolution of u with upper and lower barriers given by solutions of heat-type equations. More precisely, since \(0\le \partial _x u \le 1/\sqrt{3}\) gives \(1\le 1+(\partial _x u)^2 \le \tfrac{4}{3}\), and \(\partial ^2_xu\ge 0\) by convexity, we have that
$$\begin{aligned} \frac{3}{4}\partial ^2_xu \le \partial _t u \le \partial ^2_xu, \end{aligned}$$at any time and point. Hence we define v and w to be the solutions to the problems
$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t v = \frac{3}{4}\partial ^2_xv &{} \text {on } [0,+\infty )\times [0,1], \\ v(t,0)=0 , \\ \partial _x v(t,1) = 1/\sqrt{3}, \\ v(0,x)=u_0(x). \end{array}\right. }\\ \qquad {\left\{ \begin{array}{ll} \partial _t w= \partial ^2_xw &{} \text {on } [0,+\infty )\times [0,1], \\ w(t,0)=0 , \\ \partial _x w(t,1) = 1/\sqrt{3}, \\ w(0,x)=u_0(x). \end{array}\right. } \end{aligned}$$It is well known that v, w exist for all times and converge exponentially fast to the function \(u_\infty (x) :=x/\sqrt{3}\) as \(t\rightarrow +\infty \).
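For instance, for w the rate can be made explicit by separation of variables (a standard computation, not spelled out in the text): writing \(w=u_\infty +z\), the difference z solves the heat equation with homogeneous conditions \(z(t,0)=0\), \(\partial _x z(t,1)=0\), hence
$$\begin{aligned} w(t,x) = \frac{x}{\sqrt{3}} + \sum _{n=0}^{\infty } c_n\, e^{-\mu _n^2 t} \sin (\mu _n x), \qquad \mu _n = \left( n+\tfrac{1}{2}\right) \pi , \end{aligned}$$where the \(c_n\)'s are the Fourier coefficients of \(u_0-u_\infty \) with respect to \(\sin (\mu _n x)\). In particular \(\Vert w(t,\cdot )-u_\infty \Vert _{\infty }\le C e^{-\pi ^2 t/4}\), and analogously v converges with rate \(e^{-3\pi ^2 t/16}\).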
We can consider the function
$$\begin{aligned} z(t,x):={\left\{ \begin{array}{ll} u(t,x)-w(t,x) &{}\quad x \in [0,1], \\ u(t,2-x)-w(t,2-x) &{}\quad x \in (1,2], \end{array}\right. } \end{aligned}$$which is the even reflection of the function \(u-w\) about the point \(x=1\). Hence z is of class \(C^2\) and solves
$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t z \le \partial ^2_x u - \partial ^2_x w = \partial ^2_x z &{}\quad \text {on } [0,+\infty )\times [0,2], \\ z(t,0)=z(t,2)=0 , \\ z(0,x)=0 &{}\quad \forall \, x \in [0,2]. \end{array}\right. } \end{aligned}$$By the maximum principle, see [41, Theorem 2.1.1, Lemma 2.1.3], we get that \(z\le 0\) at any time and point, that is \(u(t,x)\le w(t,x)\).
By an analogous comparison with v, we deduce that \(v(t,x)\le u(t,x)\le w(t,x)\). Therefore the length of \(\gamma ^0_t\) is strictly positive for any \(t \in [0,T)\), which, together with Step 2 and Theorem 2.13, implies \(T=+\infty \). Moreover, since v and w converge to the same limit \(u_\infty \), the comparison forces \(u(t,\cdot )\) to converge to \(u_\infty \) as \(t\rightarrow +\infty \), which completes the proof of Step 3.
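The barrier comparison can also be observed numerically. The following sketch (ours, not from the paper; discretization parameters are arbitrary) evolves u and the upper barrier w from the same convex initial datum with the same explicit scheme and checks \(u\le w\) pointwise:

```python
import numpy as np

# u solves the quasilinear graph equation, w the linear heat equation;
# both share u(t,0) = 0 and slope 1/sqrt(3) at x = 1.
n, dt, steps = 51, 1e-4, 10_000
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = x**2 / (2.0*np.sqrt(3))   # convex initial graph with the right boundary slopes
w = u.copy()

def bc(arr):
    arr[0] = 0.0
    arr[-1] = arr[-2] + h / np.sqrt(3)   # one-sided Neumann condition

for _ in range(steps):
    bc(u); bc(w)
    uxx = (u[2:] - 2.0*u[1:-1] + u[:-2]) / h**2
    ux = (u[2:] - u[:-2]) / (2.0*h)
    wxx = (w[2:] - 2.0*w[1:-1] + w[:-2]) / h**2
    u[1:-1] += dt * uxx / (1.0 + ux**2)   # quasilinear graph equation
    w[1:-1] += dt * wxx                   # heat equation (upper barrier)
bc(u); bc(w)
print(bool(np.all(u <= w + 1e-6)))   # True: w stays above u
```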
Data availability
The manuscript has no associated data.
References
Baldi, P., Haus, E., Mantegazza, C.: Non-existence of \(\theta \)-shaped self-similarly shrinking networks moving by curvature. Commun. Partial Differ. Equ. 43(3), 403–427 (2018)
Bardsley, P., Barmak, K., Eggeling, E., Epshteyn, Y., Kinderlehrer, D., Ta’asan, S.: Towards a gradient flow for microstructure. Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 28(4), 777–805 (2017)
Brakke, K.A.: The Motion of a Surface by its Mean Curvature. Princeton University Press, Princeton (1978)
Bronsard, L., Reitich, F.: On three-phase boundary motion and the singular limit of a vector-valued Ginzburg–Landau equation. Arch. Rational Mech. Anal. 124(4), 355–379 (1993)
Burago, D., Burago, Y., Ivanov, S.: A Course in Metric Geometry. Graduate Studies in Mathematics, vol. 33. American Mathematical Society, Providence (2001)
Carlotto, A., Chodosh, O., Rubinstein, Y.: Slowly converging Yamabe flows. Geom. Topol. 19(3), 1523–1568 (2015)
Chang, J.-E.: Stability of regular shrinkers in the network flow (2021). arXiv:2107.04338
Chang, J.-E., Lue, Y.-K.: Uniqueness of regular shrinkers with two enclosed regions. Geom. Dedicata 216(1), 17 (2022)
Chill, R.: On the Łojasiewicz–Simon gradient inequality. J. Funct. Anal. 201(2), 572–601 (2003)
Chill, R., Fašangová, E., Schätzle, R.: Willmore blowups are never compact. Duke Math. J. 147(2), 345–376 (2009)
Chodosh, O., Schulze, F.: Uniqueness of asymptotically conical tangent flows. Duke Math. J. 170(16), 3601–3657 (2021)
Colding, T.H., Minicozzi, W.P., II.: Uniqueness of blowups and Łojasiewicz inequalities. Ann. Math. 182(1), 221–285 (2015)
Dall’Acqua, A., Pozzi, P., Spener, A.: The Łojasiewicz–Simon gradient inequality for open elastic curves. J. Differ. Equ. 261(3), 2168–2209 (2016)
Deimling, K.: Nonlinear Functional Analysis. Springer, Berlin (1985)
Denk, R., Saal, J., Seiler, J.: Inhomogeneous symbols, the Newton polygon, and maximal \(L^p\)-regularity. Russ. J. Math. Phys. 15(2), 171–191 (2008)
do Carmo, M.P.: Riemannian Geometry. Mathematics: Theory and Applications. Birkhäuser Boston, Inc., Boston (1992). Translated from the second Portuguese edition by Francis Flaherty
Dziuk, G., Kuwert, E., Schätzle, R.: Evolution of elastic curves in \({\mathbb{R}}^n\): existence and computation. SIAM J. Math. Anal. 33(5), 1228–1245 (2002)
Ecker, K., Huisken, G.: Mean curvature evolution of entire graphs. Ann. Math. (2) 130(3), 453–471 (1989)
Epshteyn, Y., Liu, C., Mizuno, M.: Large time asymptotic behavior of grain boundaries motion with dynamic lattice misorientations and with triple junctions drag. Commun. Math. Sci. 19(5), 1403–1428 (2021)
Epshteyn, Y., Liu, C., Mizuno, M.: Motion of grain boundaries with dynamic lattice misorientations and with triple junctions drag. SIAM J. Math. Anal. 53(3), 3072–3097 (2021)
Esedoglu, S., Otto, F.: Threshold dynamics for networks with arbitrary surface tensions. Commun. Pure Appl. Math. 68(5), 808–864 (2015)
Feehan, P.M.N.: Global existence and convergence of solutions to gradient systems and applications to Yang–Mills gradient flow (2016). arXiv:1409.1525
Fischer, J., Hensel, S., Laux, T., Simon, T.: The local structure of the energy landscape in multiphase mean curvature flow: weak–strong uniqueness and stability of evolutions (2020). arXiv:2003.05478
Fischer, J., Hensel, S., Laux, T., Simon, T.: Local minimizers of the interface length functional based on a concept of local paired calibrations (2023). arXiv:2212.11840
Garcke, H., Gößwein, M.: Non-linear stability of double bubbles under surface diffusion. J. Differ. Equ. 302, 617–661 (2021)
Gößwein, M., Menzel, J., Pluda, A.: Existence and uniqueness of the motion by curvature of regular networks. Interfaces Free Bound. 25(1), 109–154 (2023)
Hensel, S., Laux, T.: Weak–strong uniqueness for the mean curvature flow of double bubbles. Interfaces Free Bound. 25(1), 37–107 (2023)
Hörmander, L.: The analysis of linear partial differential operators. III. Classics in Mathematics. Springer, Berlin (2007). Differential operators, Reprint of the 1994 edition
Huisken, G.: Asymptotic behavior for singularities of the mean curvature flow. J. Differ. Geom. 31(1), 285–299 (1990)
Ilmanen, T., Neves, A., Schulze, F.: On short time existence for the planar network flow. J. Differ. Geom. 111(1), 39–89 (2019)
Kagaya, T., Mizuno, M., Takasao, K.: Long time behavior for a curvature flow of networks related to grain boundary motion with the effect of lattice misorientations. Ann. Sc. Norm. Super. Pisa Cl. Sci. (2021). arXiv:2112.11069 (to appear)
Kim, L., Tonegawa, Y.: On the mean curvature flow of grain boundaries. Ann. Inst. Fourier (Grenoble) 67(1), 43–142 (2017)
Kim, L., Tonegawa, Y.: Existence and regularity theorems of one-dimensional Brakke flows. Interfaces Free Bound. 22(4), 505–550 (2020)
Kinderlehrer, D., Liu, C.: Evolution of grain boundaries. Math. Models Methods Appl. Sci. 11(4), 713–729 (2001)
Laux, T., Otto, F.: Convergence of the thresholding scheme for multi-phase mean-curvature flow. Calc. Var. Partial Differ. Equ. 55(5), Art. 129, 74 (2016)
Lee, J.M.: Introduction to Riemannian manifolds, Graduate Texts in Mathematics, vol. 176. Springer, Cham (2018). Second edition of [MR1468735]
Lira, J., Mazzeo, R., Pluda, A., Saez, M.: Short-time existence for the network flow. Commun. Pure Appl. Math. (2021). arXiv:2101.04302 (to appear)
Łojasiewicz, S.: Une propriété topologique des sous–ensembles analytiques réels. In Les Équations aux Dérivées Partielles (Paris, 1962), pp. 87–89. Éditions du Centre National de la Recherche Scientifique, Paris (1963)
Łojasiewicz, S.: Sur les trajectoires du gradient d’une fonction analytique. Seminari di Geometria (1982/83), Università degli Studi di Bologna, pp. 115–117 (1984)
Magni, A., Mantegazza, C., Novaga, M.: Motion by curvature of planar networks, II. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 15, 117–144 (2016)
Mantegazza, C.: Lecture Notes on Mean Curvature Flow, vol. 290. Birkhäuser, Basel (2011)
Mantegazza, C., Novaga, M., Pluda, A.: Type-0 singularities in the network flow—evolution of trees. J. Reine Angew. Math. 792, 189–221 (2022)
Mantegazza, C., Novaga, M., Pluda, A., Schulze, F.: Evolution of networks with multiple junctions. Astérisque (2016). arXiv:1611.08254 (to appear)
Mantegazza, C., Novaga, M., Tortorelli, V.M.: Motion by curvature of planar networks. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 3(2), 235–324 (2004)
Mantegazza, C., Pozzetta, M.: The Łojasiewicz–Simon inequality for the elastic flow. Calc. Var. Partial Differ. Equ. 60(56) (2021)
Mantegazza, C., Pozzetta, M.: Asymptotic convergence of evolving hypersurfaces. Rev. Mat. Iberoam. 38(6), 1927–1944 (2022)
Martelli, B., Novaga, M., Pluda, A., Riolo, S.: Spines of minimal length. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 17(3), 1067–1090 (2017)
Morgan, F.: Clusters with multiplicities in \(\mathbb{R}^2\). Pac. J. Math. 221(1), 123–146 (2005)
Mullins, W.M.: Two-dimensional motion of idealized grain boundaries. J. Appl. Phys. 27, 900–904 (1956)
Paolini, E.: Minimal connections: the classical Steiner problem and generalizations. In: “Bruno Pini” Mathematical Analysis Seminar, University of Bologna, Department of Mathematics: Academic Year 2012. Papers from the seminar held in Bologna, Italy, 2012, pp. 72–87. Univ. Bologna, Department of Mathematics, Bologna (2012)
Pluda, A., Pozzetta, M.: Minimizing properties of networks via global and local calibrations. Bull. London Math. Soc. (2023) arXiv:2206.11034 (to appear)
Pozzetta, M.: Convergence of elastic flows of curves into manifolds. Nonlinear Anal. 214, 112581 (2022)
Protter, M.H., Weinberger, H.F.: Maximum Principles in Differential Equations. Corrected reprint. Springer, New York (1984)
Rupp, F.: On the Łojasiewicz–Simon gradient inequality on submanifolds. J. Funct. Anal. 279(8), 1–32 (2020)
Rupp, F.: The Willmore flow with prescribed isoperimetric ratio (2021). arXiv:2106.02579
Rupp, F.: The volume-preserving Willmore flow. Nonlinear Anal. 230, 113220 (2023)
Schulze, F.: Uniqueness of compact tangent flows in Mean Curvature Flow. J. Reine Angew. Math. 690, 163–172 (2014)
Simon, L.: Asymptotics for a class of nonlinear evolution equations, with applications to geometric problems. Ann. Math. (2) 118(3), 525–571 (1983)
Stuvard, S., Tonegawa, Y.: On the existence of canonical multi-phase Brakke flows. Adv. Calc. Var. (2021). arXiv:2109.14415 (to appear)
White, B.: Stationary polyhedral varifolds minimize area (2020). arXiv:1912.00257
Acknowledgements
The authors are partially supported by the INdAM - GNAMPA Project 2022 CUP E55F22000270001 “Isoperimetric problems: variational and geometric aspects”. The authors are grateful to Matteo Novaga for fruitful discussions on the topic of this work.
Funding
Open access funding provided by Università degli Studi di Napoli Federico II within the CRUI-CARE Agreement.
Ethics declarations
Conflict of interest.
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Appendices
Tools needed in some proofs
1.1 Quantitative implicit function theorem
For the convenience of the reader, we sketch the proof of a quantitative implicit function theorem. Specifically, a lower bound on the width of the domain of the implicit function f is given in terms of bounds on the norms of the derivatives of the starting map F. The proof is a simplified finite-dimensional version of the general [14, Theorem 15.8]. An analogous argument can be found in unpublished lecture notes by C. Liverani.
Theorem A.1
Let \(n,m\in {\mathbb {N}}\), \(n,m\ge 1\), and \((x_0,y_0)\in {\mathbb {R}}^n\times {\mathbb {R}}^m\). Denote \(Q^n_r:=\{x \in {\mathbb {R}}^n \ :\ |x-x_0|<r \}\) and \(Q^m_r:=\{y \in {\mathbb {R}}^m \ :\ |y-y_0|<r \}\), for any \(r>0\).
Let \(F:U\rightarrow {\mathbb {R}}^m\) be a \(C^1\) function, where \(U\subset {\mathbb {R}}^n\times {\mathbb {R}}^m\) is a neighborhood of \((x_0,y_0)\), and assume that \(F(x_0,y_0)=0\). Suppose that
-
\(\partial _yF(x_0,y_0)\) is invertible, and let \(S:=\Vert [\partial _yF(x_0,y_0)]^{-1}\Vert \);
-
\(\rho >0\) is such that \(\Vert \textrm{id} - [\partial _yF(x_0,y_0)]^{-1}\partial _yF(x,y)\Vert \le \tfrac{1}{2}\) for \((x,y) \in Q^n_\rho \times Q^m_\rho \) and \({\overline{Q}}^n_\rho \times {\overline{Q}}^m_\rho \Subset U\).
Hence, denoting \(N:=\sup \{ \Vert \partial _x F(x,y)\Vert \ :\ (x,y) \in Q^n_\rho \times Q^m_\rho \}\), there exist \(r=r(\rho ,S,N) \in (0,\rho ]\) and a unique function \(f:Q^n_r\rightarrow Q^m_\rho \) such that \(f(x_0)=y_0\) and, for \(x \in Q^n_r\) and \(y \in Q^m_\rho \), one has \(F(x,y)=0\) if and only if \(y=f(x)\).
Proof
We just prove that the radius r for the domain \(Q^n_r\) of the implicit function f can be chosen depending only on \(\rho ,S,N\).
Without loss of generality, let \((x_0,y_0)=(0,0)\). Let \(r=\min \{\rho /(2SN),\rho /2\}\). For any \(x\in {\overline{Q}}^n_r\) consider the function \(\phi _x:{\overline{Q}}^m_\rho \rightarrow {\mathbb {R}}^m\) given by \(\phi _x(y):=y -[\partial _yF(x_0,y_0)]^{-1} F(x,y) \). We observe that
$$\begin{aligned} \Vert \partial _y \phi _x(y)\Vert = \Vert \textrm{id} - [\partial _yF(x_0,y_0)]^{-1}\partial _yF(x,y)\Vert \le \tfrac{1}{2}, \end{aligned}$$for any \(y \in Q^m_\rho \). Hence
$$\begin{aligned} |\phi _x(y)| \le |\phi _x(y)-\phi _x(0)| + |\phi _x(0)| \le \tfrac{1}{2}|y| + S\,|F(x,0)| \le \tfrac{\rho }{2} + SN|x| \le \tfrac{\rho }{2} + SNr \le \rho , \end{aligned}$$for any \(y \in {\overline{Q}}^m_\rho \). Therefore \(\phi _x:{\overline{Q}}^m_\rho \rightarrow {\overline{Q}}^m_\rho \) is a \(\tfrac{1}{2}\)-contraction, and thus there exists a unique \(y \in {\overline{Q}}^m_\rho \) such that \(\phi _x(y)=y\), that is, \(F(x,y)=0\). Defining \(f(x)\) to be the unique y such that \(\phi _x(y)=y\), we see that f is defined on \(Q^n_r\), and the claim follows. \(\square \)
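The proof is a direct application of the Banach fixed point theorem. As an illustration (hypothetical example, not from the paper), take \(F(x,y)=y^2+x-1\) with \((x_0,y_0)=(0,1)\), so that \(\partial _yF(x_0,y_0)=2\) is invertible; iterating \(\phi _x\) recovers the implicit branch \(y=\sqrt{1-x}\):

```python
import math

def implicit_branch(x, y0=1.0, n_iter=60):
    # Iterate phi_x(y) = y - [dF/dy(x0, y0)]^{-1} F(x, y); its fixed point
    # solves F(x, y) = 0, here the branch y = sqrt(1 - x).
    y = y0
    for _ in range(n_iter):
        y = y - (y**2 + x - 1.0) / 2.0
    return y

print(implicit_branch(0.19), math.sqrt(1.0 - 0.19))  # both approximately 0.9
```

For x small, \(\phi _x'(y)=1-y\) is small near the fixed point, so the iteration contracts, exactly as in the proof.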
1.2 A monotonicity-type formula
We derive here an evolution formula in the spirit of the celebrated Huisken Monotonicity Formula [29, Theorem 3.1], see also [18, Section 1] and [41, Theorem 3.1.5, Exercise 3.1.6].
Let \(\rho :[0,T)\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}\) be defined by
$$\begin{aligned} \rho (t,p) :=\frac{1}{\sqrt{4\pi (t_0-t)}}\, e^{-\frac{|p-p_0|^2}{4(t_0-t)}}, \end{aligned}$$(A.1)for some fixed \(p_0 \in {\mathbb {R}}^2\) and \(t_0>0\), which satisfies
$$\begin{aligned} \nabla \rho \,\big |_{(t,p)} = -\frac{p-p_0}{2(t_0-t)}\,\rho (t,p), \qquad \Delta \rho \,\big |_{(t,p)} = \left( \frac{|p-p_0|^2}{4(t_0-t)^2} - \frac{1}{t_0-t} \right) \rho (t,p). \end{aligned}$$In particular \(\partial _t \rho = - \Delta \rho - \frac{\rho }{2(t_0-t)}\).
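The identity \(\partial _t \rho = - \Delta \rho - \frac{\rho }{2(t_0-t)}\) for the backward heat kernel \(\rho (t,p)=(4\pi (t_0-t))^{-1/2}e^{-|p-p_0|^2/(4(t_0-t))}\) can be sanity-checked by finite differences (a numerical sketch of ours, with arbitrary sample point and step size):

```python
import numpy as np

t0 = 1.0
p0 = np.array([0.3, -0.2])

def rho(t, p):
    # 1-dimensional backward heat kernel centered at (t0, p0), evaluated in R^2
    tau = t0 - t
    return np.exp(-np.sum((p - p0)**2) / (4.0*tau)) / np.sqrt(4.0*np.pi*tau)

t, p, h = 0.4, np.array([0.5, 0.1]), 1e-4
dt_rho = (rho(t + h, p) - rho(t - h, p)) / (2.0*h)          # central difference in t
lap = sum((rho(t, p + h*e) - 2.0*rho(t, p) + rho(t, p - h*e)) / h**2
          for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # Laplacian in p
residual = dt_rho - (-lap - rho(t, p) / (2.0*(t0 - t)))
print(abs(residual))   # close to zero, up to finite-difference error
```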
Moreover consider a smooth evolution of an immersed curve \(\gamma :[0,T)\times [0,1]\rightarrow {\mathbb {R}}^2\) by
$$\begin{aligned} \partial _t \gamma _t(x) = {\varvec{k}}_t(x) + \lambda _t(x)\, \tau _t(x), \end{aligned}$$(A.2)where \({\varvec{k}}\) denotes the curvature vector and \(\lambda \) a smooth tangential velocity.
Lemma A.2
Let \(\rho \) and \(\gamma \) be as in (A.1) and (A.2), for some \(p_0 \in {\mathbb {R}}^2\) and \(t_0>0\). Let \(f:[0,T)\times [0,1]\rightarrow {\mathbb {R}}\) be a function such that \(f(t,\cdot ) \in H^2(0,1)\) for any t and differentiable with respect to t with \(\partial _t f\) continuous, with \(T\ge t_0\). Denoting \(\rho \circ \gamma :=\rho (t,\gamma (t,x))\), then
$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t} \int _0^1 ( \rho \circ \gamma ) \, f \,\mathrm ds&\le \int _0^1 ( \rho \circ \gamma )(\partial _t-\partial _s^2)f \,\mathrm ds \\&\quad + \int _0^1 \left( \partial _s \lambda -\frac{\lambda }{2(t_0-t)}\langle \gamma -p_0,\tau \rangle \right) ( \rho \circ \gamma ) \, f \,\mathrm ds \\&\quad + \big ( ( \rho \circ \gamma )\partial _sf - f \partial _s ( \rho \circ \gamma )\big )\big |_0^1, \end{aligned}$$for any \(t\in [0,t_0)\).
Proof
If \(p =\gamma _t(x)\) then
$$\begin{aligned} \nabla \rho \,\big |_{(t,p)} = -\frac{p-p_0}{2(t_0-t)}\,\rho (t,p), \qquad \nabla ^2 \rho \,\big |_{(t,p)}(\nu ,\nu ) = \left( \frac{|(p-p_0)^\perp |^2}{4(t_0-t)^2} - \frac{1}{2(t_0-t)} \right) \rho (t,p), \end{aligned}$$where \((\cdot )^\perp \) denotes projection along \(\nu _t(x)\). Since \(\partial _t \rho = - \Delta \rho - \frac{\rho }{2(t_0-t)}\), recalling the relation between Euclidean and intrinsic Laplacian on a submanifold [41, Lemma 3.1.2], we have
$$\begin{aligned} \partial _t \rho \,\big |_{(t,\gamma _t(x))} = - \partial _s^2 (\rho \circ \gamma ) + \langle \nabla \rho , {\varvec{k}}\rangle - \frac{\rho \circ \gamma }{4(t_0-t)^2}\, |(\gamma _t(x)-p_0)^\perp |^2. \end{aligned}$$
Recalling that \(\partial _t(\textrm{d}s) = (\partial _s\lambda - |{\varvec{k}}|^2) \,\mathrm ds\), the desired formula follows by directly taking the derivative with respect to t and integrating by parts twice in order to transfer the derivatives with respect to s from \(\rho \) to f. \(\square \)
Remark A.3
Lemma A.2 is a generalization of [18, Equation (7), page 455] to the case of a curve with boundary and evolving with a tangential velocity \(\lambda \) different from zero. We refer also to [44, Lemma 6.3] for the case in which \(f\equiv 1\).
Minimal networks on surfaces
In this section we discuss to what extent the theory developed in this work can be adapted to the case of networks in surfaces.
We consider 2-dimensional complete Riemannian manifolds without boundary, denoted by \((\Sigma , g)\). The obvious adaptation of the definitions given in Sect. 2.1 allows one to speak of networks \(\Gamma :G\rightarrow \Sigma \), as well as of motion by curvature (for short time existence see for instance [37, Section 8.4]). A minimal network \(\Gamma _*:G\rightarrow \Sigma \) is a collection of geodesic arcs in \(\Sigma \) meeting at triple junctions forming equal angles.
We expect that minor technical modifications of our arguments lead to the validity of a Łojasiewicz–Simon inequality as in Theorem 1.1 for minimal networks on any analytic surface \((\Sigma , g)\). If such a minimal network is also a local minimizer for the length functional with respect to perturbations sufficiently small in \(H^2\) which do not move endpoints, then the stability result as in Theorem 1.2 holds. In particular, relevant examples of Riemannian surfaces where stability can also be proved are given by simply connected analytic surfaces with non-positive sectional curvature, such as the 2-dimensional hyperbolic space or complete minimal immersions of \({\mathbb {R}}^2\) in \({\mathbb {R}}^n\).
Let us now describe the main modifications that one should carry out to deduce the previous claims.
Concerning preliminary results, suitably adapting the arguments from [26], a short time existence theorem for the flow as in Theorem 2.10 can be proved. Moreover, the characterization of singularities described in Theorem 2.13 can be deduced analogously.
As discussed in the work, a graph parametrization like the one established in Sect. 3.1 is necessary in order to apply the recent abstract theory that implies a Łojasiewicz–Simon inequality.
The results of Sect. 3.1 can be directly adapted to networks on \(\Sigma \) by employing the exponential map \(\exp \) on \((\Sigma , g)\). In fact, let \(\Gamma _*, \Gamma :G\rightarrow \Sigma \) be a minimal network and a network, respectively, let \(m:=\pi (e^i,i)=\pi (e^j,j)=\pi (e^k,k)\) be a junction and denote \(m_*:=\Gamma _*(m)\). Assuming that the distance between \(\Gamma (m)\) and \(m_*\) is less than the injectivity radius \(\textrm{inj}(m_*)\) of \(\Sigma \) at \(m_*\), the image \(\gamma ^\ell (e^\ell )\), for \(\ell \in \{i,j,k\}\), can be written as
$$\begin{aligned} \gamma ^\ell (e^\ell ) = \exp _{m_*}\left( {\textsf{N}}^\ell (e^\ell )\,\nu ^\ell _*(e^\ell ) + {\textsf{T}}^\ell (e^\ell )\,\tau ^\ell _*(e^\ell )\right) , \end{aligned}$$for some \({\textsf{N}}^\ell (e^\ell ), {\textsf{T}}^\ell (e^\ell )\). Hence applying the inverse \(\exp _{m_*}^{-1}\) on the equalities
$$\begin{aligned} \gamma ^\ell (e^\ell ) = \gamma ^s(e^s), \end{aligned}$$for \(\ell \ne s\), \(\ell ,s \in \{i,j,k\}\), readily implies the same linear identities obtained in Lemma 3.1. Hence, conversely, inverting such linear relations as in Lemma 3.2, one proves the direct analog of Lemma 3.2. This implies that the linear operators of Definition 3.3 can also be used in the setting of networks on \(\Sigma \).
Finally, assuming that parametrizations of \(\Gamma \) are sufficiently close in \(H^2\) to parametrizations of \(\Gamma _*\), meaning that \(\sum _i \Vert \exp _{\gamma ^i_*(\cdot )}^{-1}(\gamma ^i(\cdot ))\Vert _{H^2}\) is bounded above by a constant also depending on the injectivity radius of \(\Gamma _*(G)\), a version of Proposition 3.4 holds for networks on \(\Sigma \). More precisely, curves \(\gamma ^i\) can be written as
$$\begin{aligned} \gamma ^i(x) = \exp _{\gamma ^i_*(x)}\left( {\textsf{N}}^i(x)\,\nu ^i_*(x) + {\textsf{T}}^i(x)\,\tau ^i_*(x)\right) , \end{aligned}$$up to reparametrization, where the \({\textsf{T}}^i\)'s are adapted to the \({\textsf{N}}^i\)'s. In order to perform the proof of Proposition 3.4 on \(\Sigma \), it suffices to adapt the argument in neighborhoods of junctions: this can be carried out by passing into a local chart.
Variations of parametrizations of \(\Gamma _*\) analogous to the ones in Proposition 3.7 and in Proposition 3.8 take the forms
$$\begin{aligned} \exp _{\gamma ^i_*(x)}\left( \varepsilon \, {\textsf{N}}^i(x)\,\nu ^i_*(x)\right) \qquad \text {and}\qquad \exp _{\gamma ^i_*(x)}\left( \varepsilon \left( {\textsf{N}}^i(x)\,\nu ^i_*(x)+{\textsf{T}}^i(x)\,\tau ^i_*(x)\right) \right) , \end{aligned}$$respectively. Carrying out computations for first and second variations, see [16, Chapter 9] and [36, Theorem 10.22, Proposition 10.24], one obtains the same formulae given in Proposition 3.7 and in Proposition 3.8, except that now the second variation formula (3.29) also contains the additive term
$$\begin{aligned} -\sum _i \int _0^1 (K\circ \gamma ^i_*)\, ({\textsf{N}}^i)^2 \,\mathrm ds, \end{aligned}$$where K(p) is the sectional curvature of \(\Sigma \) at p. However, the linear operator
$$\begin{aligned} {\textsf{N}}^i \longmapsto (K\circ \gamma ^i_*)\,{\textsf{N}}^i, \qquad H^2(0,1)\rightarrow L^2(0,1), \end{aligned}$$
is compact. Therefore the second variation operator for the length functional differs by a compact operator from the one considered in Sect. 3.3. Since Fredholmness is stable under compact perturbations, i.e., a linear operator T between Banach spaces is Fredholm of index l if and only if \(T+T'\) is Fredholm of index l, for any compact operator \(T'\) (see [28, Section 19.1]), the Fredholmness property required by Proposition 3.12 follows.
Assuming that \((\Sigma , g)\) is analytic, all the functional analytic properties required on first and second variations by Proposition 3.12 can be derived as done in Sect. 3.3. Observe that analyticity of the metric g is required as the exponential map shall appear in the expression for the first variation (compare, e.g., with [52, Proposition 3.20]). Eventually, we deduce that a Łojasiewicz–Simon inequality as in Theorem 1.1 holds for any minimal network on an analytic surface \((\Sigma , g)\).
Concerning the stability of minimal networks on surfaces, we cannot expect that a version of Theorem 1.2 always holds. Indeed, differently from the case of \({\mathbb {R}}^2\) (Lemma 4.1), a minimal network on a surface does not necessarily minimize the length among small perturbations, as already happens for geodesics.
However, assuming that a minimal network \(\Gamma _*:G\rightarrow \Sigma \) locally minimizes the length with respect to perturbations having \(H^2\)-norm sufficiently small and that do not move endpoints, the arguments in the proof of Theorem 5.2 can be adapted (see, e.g., [52, Theorem 4.5]) to deduce the desired stability. From the technical viewpoint, observe that local minimality with respect to small perturbations is manifestly needed in the argument so that, in the notation of Theorem 5.2, the difference \(({\textrm{L}}(\Gamma _t)-{\textrm{L}}(\Gamma _*))\) is non-negative, and thus \(H(t):=({\textrm{L}}(\Gamma _t)-{\textrm{L}}(\Gamma _*))^\theta \) is well defined.
To conclude, we observe that minimal networks minimize the length among perturbations having \(C^0\)-norm sufficiently small and which do not move endpoints on simply connected surfaces with non-positive sectional curvature. Indeed, this minimizing property is proved in [47, Theorem 3.7] for surfaces with constant non-positive sectional curvature and it is based on a contradiction argument in combination with the fact that \(\delta (t):=d(\gamma (t),\sigma (t))\) is convex if d is the geodesic distance on \((\Sigma ,g)\) and \(\gamma ,\sigma \) are minimizing geodesics. The very same argument can be generalized to simply connected surfaces with non-positive sectional curvature taking into account that every geodesic on such a surface is minimizing [5, Theorem 9.2.2] and that convexity for a function \(\delta (t)\) as before holds in this generality as well [5, Lemma 9.2.3].
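The convexity of \(\delta (t):=d(\gamma (t),\sigma (t))\) along two geodesics is immediate to verify in the flat case, where geodesics are straight lines; a quick numerical check of ours (hypothetical data):

```python
import numpy as np

# Two straight lines (the geodesics of the flat plane), hypothetical data:
p, u = np.array([0.0, 0.0]), np.array([1.0, 0.5])
q, v = np.array([2.0, 1.0]), np.array([-0.3, 1.0])

t = np.linspace(0.0, 1.0, 201)
delta = np.linalg.norm((p[None] + t[:, None]*u[None])
                       - (q[None] + t[:, None]*v[None]), axis=1)
# Central second differences of a convex function on a uniform grid are >= 0:
second_diff = delta[2:] - 2.0*delta[1:-1] + delta[:-2]
print(second_diff.min() >= 0.0)   # delta is convex
```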
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Pluda, A., Pozzetta, M. Łojasiewicz–Simon inequalities for minimal networks: stability and convergence. Math. Ann. (2023). https://doi.org/10.1007/s00208-023-02714-7