1 Introduction

In this paper, we consider the geometric evolution of a curve with a partially free boundary. To be more precise, we consider the gradient flow of the elastic energy under the constraint that the boundary points of the curve have to remain attached to the x-axis.

This paper fits within the broad range of topics on the geometric evolution of curves and surfaces, where the evolution law is dictated by functions of curvature. These topics have recently gained increasing attention from the mathematical community due to their applications to various physical problems and the fascinating challenges they present in analysis and geometry.

The elastic energy of a curve is the sum of the squared \(L^2\)-norm of its curvature \(\varvec{\kappa }\) (also known as the one-dimensional Willmore functional) and \(\mu \) times its length, namely

$$\begin{aligned} \mathcal {E}(\gamma ) =\int _{\gamma }\vert \varvec{\kappa }\vert ^2 +\mu {\,\textrm{d} }s \end{aligned}$$

where \(\mu >0\).
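As a purely illustrative aside (our own addition, not needed in the sequel), \(\mathcal {E}\) can be approximated on a polygonal curve by replacing \(k\) with the signed turning angle per unit length at each vertex. The sketch below makes this precise; the function name and the vertex-based length element are our own conventions.

```python
import numpy as np

def elastic_energy(pts, mu):
    """Discrete E(gamma) = int |k|^2 + mu ds for a closed polyline.

    pts: (N, 2) array of vertices.  The curvature at a vertex is
    approximated by the signed turning angle between the adjacent
    edges, divided by the averaged adjacent edge length.
    """
    e_prev = pts - np.roll(pts, 1, axis=0)    # edge arriving at each vertex
    e_next = np.roll(pts, -1, axis=0) - pts   # edge leaving each vertex
    l_prev = np.linalg.norm(e_prev, axis=1)
    l_next = np.linalg.norm(e_next, axis=1)
    cross = e_prev[:, 0]*e_next[:, 1] - e_prev[:, 1]*e_next[:, 0]
    dot = (e_prev*e_next).sum(axis=1)
    ds = 0.5*(l_prev + l_next)                # vertex-based length element
    k = np.arctan2(cross, dot) / ds           # discrete oriented curvature
    return float(np.sum((k**2 + mu)*ds))
```

For a circle of radius \(R\) one has \(\mathcal {E}=2\pi /R+2\pi \mu R\); on a finely sampled unit circle the discrete value agrees with \(2\pi (1+\mu )\) up to an error of order \(N^{-2}\).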

Before passing to the evolutionary problem, we say a few words about the critical points of the energy \(\mathcal {E}\), known as elasticae or elastic curves. As explained in [48], elasticae have been studied since the time of Bernoulli and Euler, who used the elastic energy as a model for the bending energy of elastic rods. Later, Born, in his Thesis of 1906, plotted the first figures of elasticae using numerical schemes. In the last decades, many authors have contributed to their classification: for instance, we refer to Langer and Singer [26, 27], Linnér [30], Djondjorov et al. [18] and Langer and Singer [28], as well as to Bevilacqua, Lussardi and Marzocchi [9], and the same authors with Ballarin [8], for the case of a functional which depends both on the curvature and the torsion of the curve. More recently, Miura and Yoshizawa, in a series of papers [36,37,38], gave a complete classification of both clamped and pinned p-elasticae.

In this paper, we aim to study the \(L^2\)-gradient flow of \(\mathcal {E}\). To the best of our knowledge, the problem was introduced by Polden in his PhD Thesis [43], where it is shown that, given as initial datum a smooth immersion of the circle in the plane, there exists a smooth solution to the gradient flow problem for all positive times which sub-converges to an elastica. Then, Dziuk, Kuwert and Schätzle generalized the global existence and sub-convergence result to \({{{\mathbb {R}}}}^n\) and derived an algorithm to treat the flow and compute several numerical examples. Later, the evolution of elastic curves was studied in detail both for closed curves (see for instance [19, 33, 43, 44]) and for open curves, with Navier boundary conditions in [39, 40] and clamped boundary conditions in [16, 29, 40, 47]. We also recall that slightly different problems were tackled, among others, by Wen in [49] and by Rupp and Spener in [45], where the authors analyzed the elastic flow of curves with a nonzero rotation index and with clamped boundary conditions, respectively, in both cases subject to fixed length, and in [25, 41, 42], where a variety of constraints are considered. For the sake of completeness, we also mention that the \(L^2\)-gradient flow of \(\int _{\gamma }\vert \varvec{\kappa }\vert ^2 {\,\textrm{d} }s \) for curves subject to fixed length is studied in [12, 13, 19], while other fourth (or higher) order flows are analyzed, for instance, in [1, 2, 15, 34, 35, 50, 51]. Finally, we mention the survey [32] for a complete review of the literature and the references therein.

As already said, in this paper we evolve a curve under the constraint that it remains attached to the x-axis. To derive the flow, we start by writing the associated Euler–Lagrange equations; in particular, we find suitable “natural” boundary conditions for this problem (known in the literature as Navier conditions). We thus obtain that the evolution can be described by solutions of a quasilinear fourth order system with the boundary conditions in (2.7), namely the attachment condition and the second and third order conditions. We then introduce a class of admissible initial curves of class \(C^{4+\alpha }\) with \(\alpha \in (0,1)\), which need to be non-degenerate, in the sense that the y-component of the unit tangent vector must be positive at the boundary points, and to satisfy (in addition to the conditions mentioned above) an extra fourth order condition (see Definition 2.2).

Then, we establish well-posedness of the flow. More precisely, starting with a (geometrically) admissible initial curve, we prove in Theorem 3.14 that there exists a unique (up to reparametrization) solution to the flow in a small time interval [0, T] with \(T>0\), which can be described by a parametrization of class \(C^{{\frac{4+\alpha }{4}},4+\alpha } ([0, T ] \times [0, 1])\).

To do so, we choose a specific tangential velocity turning the system (2.9) into a non-degenerate parabolic boundary value problem without changing the geometric nature of the evolution (namely the analytic problem (3.2)). Then, we solve the analytic problem using a linearization procedure and a fixed point argument. The main difficulty is to solve the associated linear system (3.5), coupled with extra compatibility conditions (see Definition 3.4), employing the Solonnikov theory for linear parabolic systems in Hölder spaces introduced in [46], as shown in Theorem 3.5.

Once we have a solution for the analytic problem, the key point is to ensure that solving (3.2) is enough to obtain a unique solution to the original geometric problem. This is shown in Theorem 3.14, following the approach presented in [20] and later in [21].

The second natural step is trying to understand the long-time behavior of the evolving curves. This leads to our main result.

Theorem 1.1

Let \(\gamma _0\) be a geometrically admissible initial curve and \(\gamma _t\) be a solution to the elastic flow with initial datum \(\gamma _0\) in the maximal time interval [0, T) with \(T \in (0, \infty ) \cup \{\infty \}\). Then, up to reparametrization and translation of \(\gamma _t\), it follows that either

$$\begin{aligned} T= \infty \end{aligned}$$

or at least one of the following holds

  • the limit inferior of the length of \(\gamma _t\) is zero as \( t \rightarrow T\);

  • the limit inferior of the y-component of the unit tangent vector at the boundary is zero as \( t \rightarrow T\).

Even though the structure of the proof of this result is based on a contradiction argument already present in the literature (see for instance [14, 19, 22, 32, 43]), this is the most technical part of the paper and it contains relevant novelties.

We find energy type inequalities, more precisely bounds on the \(L^2\)-norms of the second and sixth derivatives of the curvature, which lead to a contradiction with the finiteness of T. These estimates, which involve the smallest number of derivatives, can be derived under the assumption that during the evolution the length is uniformly bounded away from zero and the curve remains non-degenerate in a uniform sense (see Definition 4.4).

Moreover, we underline that only estimates for geometric quantities, namely the curvature, are needed. In particular, the proof itself is independent of the choice of tangential velocity, which is consistent with the very definition of the flow, where only the normal velocity is prescribed. For this reason, following [14], we reparametrize the flow in such a way that the tangential velocity linearly interpolates its values at the boundary points (see condition (4.13)) and such that suitable estimates hold both in the interior and at the boundary points. With this choice and the uniform bounds for the curvature, we can extend the flow smoothly up to the time T given by the short-time existence result and then restart the flow, contradicting the maximality of T.

In short, our approach combines the one presented in [14] with that of [22]: we choose a tangential velocity as explained above and we use the minimum number of derivatives (and hence of estimates) needed to conclude the proof of Theorem 1.1.

This work is organized as follows: in the next section we formulate the geometric evolution problem for elastic curves and we show that the energy \(\mathcal {E}\) decreases along the evolution. In Section 3 we show short-time existence of a unique smooth solution using the Solonnikov theory and a contraction argument, and we also show geometric uniqueness. In Section 4 we derive uniform estimates on the curvature, which are then used in the final Section 5 to prove the long-time existence result.

2 The elastic flow

2.1 Preliminary definitions and notation

A regular curve is a continuous map \(\gamma : [a,b] \rightarrow {{{\mathbb {R}}}}^2\) which is differentiable on \((a,b)\) and such that \(\vert \partial _x\gamma \vert \) never vanishes on \((a,b)\). Without loss of generality, from now on we consider \([a,b]=[0,1]\).

We denote by s the arclength parameter, then \(\partial _s:=\frac{1}{\vert \partial _x\gamma \vert }\partial _x\) and \({\,\textrm{d} }s:= \vert \partial _x\gamma \vert {\,\textrm{d} }x\) are the derivative and the measure with respect to the arclength parameter of the curve \(\gamma \), respectively.

From now on, we will pass to the arclength parametrization of the curves without further comments.

If we assume that \(\gamma \) is a regular planar curve of class at least \(C^1\), we can define the unit tangent vector \(\tau =\vert \partial _x\gamma \vert ^{-1} \partial _x\gamma \) and the unit normal vector \(\nu \) as the anticlockwise rotation by \(\pi /2\) of the unit tangent vector.

We introduce the operator \(\partial _s^\perp \), acting on vector fields \(\varphi \) along the curve \(\gamma \), defined as the normal component of \(\partial _s\varphi \), that is \(\partial _s^\perp \varphi =\partial _s\varphi -\left\langle \partial _s\varphi ,\partial _s\gamma \right\rangle \partial _s\gamma \). Moreover, for any vector \(\psi (\cdot ) \in {{{\mathbb {R}}}}^2\), we use the notation \((\psi (\cdot )_1,\psi (\cdot )_2)\) to denote its components with respect to the x-axis and the y-axis, respectively.

Let \(\mu >0\). Assuming that \(\gamma \) is of class \(H^2\), we denote by \(\varvec{\kappa } = \partial _s\tau \) the curvature vector and we define the elastic energy with a length penalization

$$\begin{aligned} \mathcal {E}(\gamma ) =\int _{\gamma }\vert \varvec{\kappa }\vert ^2 +\mu {\,\textrm{d} }s \,. \end{aligned}$$

Denoting by k the oriented curvature, by means of the relation \(\varvec{\kappa }= k \nu \), which holds in \({{{\mathbb {R}}}}^2\), the energy functional can be equivalently written as

$$\begin{aligned} \mathcal {E}(\gamma )=\int _{\gamma }k^2+\mu {\,\textrm{d} }s \,. \end{aligned}$$
(2.1)

2.2 Formal derivation of the flow

Let \(\gamma :[0,1]\rightarrow {\mathbb {R}}^2\) be a regular curve of class \(H^2\). We consider a variation \(\gamma _\varepsilon =\gamma +\varepsilon \psi \) with \(\varepsilon \in {\mathbb {R}}\) and \(\psi :[0,1]\rightarrow {\mathbb {R}}^2\) of class \(H^2\), which is regular whenever \(\vert \varepsilon \vert \) is small enough. By direct computations (see [33], for instance) we get the first variation of \(\mathcal {E}\), that is

$$\begin{aligned} \frac{d}{d\varepsilon }\mathcal {E}(\gamma _\varepsilon )\Big \vert _{\varepsilon =0}&=\int _{\gamma } 2 \langle \varvec{\kappa }, \partial ^2_{s} \psi \rangle {\,\textrm{d} }s + \int _{\gamma } (-3 |\varvec{\kappa }|^2+\mu ) \left\langle \tau ,\partial _s\psi \right\rangle {\,\textrm{d} }s\,. \end{aligned}$$
(2.2)
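As a sanity check of (2.2), consider the closed-curve case, where no boundary terms appear: let \(\gamma \) be the unit circle parametrized by arclength and take the scaling variation \(\psi =\gamma \), so that \(\gamma _\varepsilon =(1+\varepsilon )\gamma \). Then \(\partial _s^2\psi =\varvec{\kappa }\), \(\left\langle \tau ,\partial _s\psi \right\rangle =1\) and \(\vert \varvec{\kappa }\vert =1\), hence (2.2) gives

$$\begin{aligned} \frac{d}{d\varepsilon }\mathcal {E}(\gamma _\varepsilon )\Big \vert _{\varepsilon =0} =\int _{\gamma } 2\vert \varvec{\kappa }\vert ^2 {\,\textrm{d} }s + \int _{\gamma } (-3+\mu ) {\,\textrm{d} }s = 4\pi +2\pi (\mu -3)=2\pi (\mu -1)\,, \end{aligned}$$

which agrees with differentiating \(\mathcal {E}((1+\varepsilon )\gamma )=\frac{2\pi }{1+\varepsilon }+2\pi \mu (1+\varepsilon )\) directly at \(\varepsilon =0\).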

We say that a regular curve \(\gamma \) of class \(H^2\) is a critical point of \(\mathcal {E}\) if for any \(\psi \) its first variation vanishes.

Lemma 2.1

(Euler–Lagrange equations) Let \(\gamma :[0,1]\rightarrow {\mathbb {R}}^2\) be a critical point of \(\mathcal {E}\) parametrized proportionally to arclength. Then, \(\gamma \) is smooth and satisfies

$$\begin{aligned} 2(\partial ^\perp _s)^2 \varvec{\kappa }+ |\varvec{\kappa }|^2 \varvec{\kappa } -\mu \varvec{\kappa }\, =0 \end{aligned}$$

in (0, 1). Moreover, if the endpoints are constrained to the x-axis, the following Navier boundary conditions are fulfilled

$$\begin{aligned} {\left\{ \begin{array}{ll} k(y)=0 &{}\text {curvature or second order conditions}\\ \left( -2\partial _sk(y) \nu (y) +\mu \tau (y)\right) _1 =0 &{}\text {third order conditions} \end{array}\right. } \end{aligned}$$

for \(y\in \{0,1\}\).

Proof

By a standard bootstrap argument, one can show that critical points of \(\mathcal {E}\) are actually smooth (for the reader’s convenience a proof of this fact is given in Proposition 5.3 in the appendix). Hence, integrating by parts the expression (2.2), we have

$$\begin{aligned} \frac{d}{d\varepsilon }\mathcal {E}(\gamma _\varepsilon ) \Big \vert _{\varepsilon =0} =&\int _{\gamma } \left\langle 2 (\partial ^\perp _s)^2 \varvec{\kappa }+ |\varvec{\kappa }|^2 \varvec{\kappa } -\mu \varvec{\kappa }, \psi \right\rangle {\,\textrm{d} }s\nonumber \\&+ 2 \left. \langle \varvec{\kappa }, \partial _s \psi \rangle \right| _0^1 + \left. \langle -2\partial ^\perp _s \varvec{\kappa } - |\varvec{\kappa }|^2\tau +\mu \tau , \psi \rangle \right| _0^1 \, . \end{aligned}$$
(2.3)

Since \(\gamma \) is critical, from formula (2.3) we immediately get

$$\begin{aligned} 2(\partial ^\perp _s)^2 \varvec{\kappa }+ |\varvec{\kappa }|^2 \varvec{\kappa } -\mu \varvec{\kappa }\, =0 \end{aligned}$$

and

$$\begin{aligned} 2 \left. \langle \varvec{\kappa }, \partial _s \psi \rangle \right| _0^1 + \left. \langle -2\partial ^\perp _s \varvec{\kappa } - |\varvec{\kappa }|^2\tau +\mu \tau , \psi \rangle \right| _0^1=0 \,. \end{aligned}$$
(2.4)

We now recall that

$$\begin{aligned} \partial _s^\perp \varvec{\kappa }=\partial _s\varvec{\kappa }+ \vert \varvec{\kappa } \vert ^2 \tau \,. \end{aligned}$$
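For the reader's convenience, this identity can be checked directly: writing \(\varvec{\kappa }=k\nu \) and using the planar Frenet relation \(\partial _s\nu =-k\tau \) (recalled in (2.5) below), one has \(\partial _s\varvec{\kappa }=\partial _sk\, \nu -k^2\tau \), hence \(\left\langle \partial _s\varvec{\kappa },\tau \right\rangle =-k^2\) and

$$\begin{aligned} \partial _s^\perp \varvec{\kappa }=\partial _s\varvec{\kappa }-\left\langle \partial _s\varvec{\kappa },\tau \right\rangle \tau =\partial _s\varvec{\kappa }+k^2 \tau =\partial _s\varvec{\kappa }+\vert \varvec{\kappa }\vert ^2\tau \,. \end{aligned}$$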

Hence, from \(\varvec{\kappa }= k \nu \) and the Serret-Frenet equation in the plane, that is

$$\begin{aligned} \partial _s\nu = - k \tau , \end{aligned}$$
(2.5)

the boundary terms in (2.4) reduce to

$$\begin{aligned} 2 \left. \langle k\nu , \partial _s \psi \rangle \right| _0^1 + \left\langle -2\partial _s k \nu - k^2 \tau + \mu \tau , \psi \rangle \right| _0^1. \end{aligned}$$

The fact that the endpoints must remain attached to the x-axis affects the class of test functions: we can only consider variations \(\gamma _\varepsilon =\gamma +\varepsilon \psi \) with

$$\begin{aligned} \psi (0)_2=\psi (1)_2=0. \end{aligned}$$

Now, letting first \(\psi (0)_1 =\psi (1)_1= 0\), we are left with the boundary term

$$\begin{aligned} 2 \left. \langle k\nu , \partial _s \psi \rangle \right| _0^1=0\,, \end{aligned}$$

where the test function \(\psi \) appears only through its derivative. So, we can choose a test function \(\psi \) such that

$$\begin{aligned} \partial _s \psi (0) = \nu (0) \qquad \text {and} \qquad \partial _s \psi (1)=0 \end{aligned}$$

and we get \(k(0)=0\). Then, interchanging the role of \(\partial _s \psi (0)\) and \(\partial _s \psi (1)\), we have \(k(1)=0\).

It remains to consider the last term

$$\begin{aligned} \left\langle -2\partial _s k \nu - k^2 \tau + \mu \tau , \psi \rangle \right| _0^1=0\,. \end{aligned}$$

Taking into account the condition \(k(0)=k(1)=0\), by the arbitrariness of \(\psi \) the term vanishes if and only if

$$\begin{aligned} \left( -2\partial _sk(y) \nu (y) +\mu \tau (y)\right) _1 =0 \end{aligned}$$

for \(y \in \{0,1\}\). \(\square \)

The previous lemma allows us to formally define the elastic flow of a curve with endpoints constrained to the x-axis coupling the motion equation

$$\begin{aligned} \partial _t\gamma =-2(\partial ^\perp _s)^2 \varvec{\kappa }- |\varvec{\kappa }|^2 \varvec{\kappa } +\mu \varvec{\kappa } , \end{aligned}$$
(2.6)

with the following Navier boundary conditions

$$\begin{aligned} {\left\{ \begin{array}{ll} \gamma (y)_2=0 \quad &{}\text {attachment conditions}\\ k(y)=0 &{}\text {curvature or second order conditions} \\ \left( -2\partial _sk(y) \nu (y) +\mu \tau (y)\right) _1 =0 &{}\text {third order conditions} \end{array}\right. } \end{aligned}$$
(2.7)

for \(y \in \{0,1\}\).

2.3 Definition of the geometric problem

In this section, we briefly introduce the parabolic Hölder spaces (see [46] for more details).

Given a function \(u:[0,T]\times [0,1]\rightarrow {\mathbb {R}}\), for \(\rho \in (0,1)\) we define the semi-norms

$$\begin{aligned}{}[ u]_{\rho ,0}:=\sup _{(t,x), (\tau ,x)}\frac{\vert u(t,x)-u(\tau ,x)\vert }{\vert t-\tau \vert ^\rho }, \end{aligned}$$

and

$$\begin{aligned}{}[ u]_{0,\rho }:=\sup _{(t,x), (t,y)}\frac{\vert u(t,x)-u(t,y)\vert }{\vert x-y\vert ^\rho }. \end{aligned}$$

Then, for \(l\in \{0,1,2,3,4\}\) and \(\alpha \in (0,1)\), the parabolic Hölder space

$$\begin{aligned} C^{\frac{l+\alpha }{4}, l+\alpha }([0,T]\times [0,1]) \end{aligned}$$

is the space of all functions \(u:[0,T]\times [0,1]\rightarrow {\mathbb {R}}\) that have continuous derivatives \(\partial _t ^i\partial _x^ju\) where \(i,j\in {\mathbb {N}}\) are such that \(4i+j\le l\) for which the norm

$$\begin{aligned} \left\Vert u\right\Vert _{{\frac{l+\alpha }{4},l+\alpha }}:=\sum _{4i+j=0}^l\left\Vert \partial _t ^i\partial _x^ju\right\Vert _\infty +\sum _{4i+j=l}\left[ \partial _t ^i\partial _x^ju\right] _{0,\alpha } + \sum _{0<l+\alpha -4i-j<4}\left[ \partial _t ^i\partial _x^ju\right] _{\frac{l+\alpha -4i-j}{4},0} \end{aligned}$$

is finite. Moreover, the space \(C^{\frac{\alpha }{4},\alpha }\left( [0,T]\times [0,1]\right) \) coincides with the space

$$\begin{aligned} C^{\frac{\alpha }{4}}\left( [0,T];C^0([0,1])\right) \cap C^0\left( [0,T];C^\alpha ([0,1])\right) , \end{aligned}$$

with equivalent norms.

Definition 2.2

(Admissible initial curve) A regular curve \(\gamma _0:[0,1] \rightarrow {{{\mathbb {R}}}}^2\) is an admissible initial curve for the elastic flow if

  1.

    it admits a parametrization which belongs to \(C^{4+\alpha }([0,1], {{{\mathbb {R}}}}^2)\) for some \(\alpha \in (0,1)\);

  2.

    it satisfies the Navier boundary conditions in (2.7): attachment, curvature and third order conditions;

  3.

    it satisfies the non-degeneracy condition, that is, there exists \(\rho > 0\) such that

    $$\begin{aligned} (\tau _0(y))_2 \ge \rho \qquad \text {for}\,\, y \in \{0,1\} \,. \end{aligned}$$
    (2.8)
  4.

    it satisfies the following fourth order condition

    $$\begin{aligned} ((-2\partial _s^2k_0(y)-k_0^3(y)+\mu k_0(y))\nu _0(y))_2=0 \qquad \text {for}\,\, y \in \{0,1\} \,. \end{aligned}$$

Definition 2.3

(Solution of the geometric problem) Let \(\gamma _0\) be an admissible initial curve as in Definition 2.2 and \(T>0\). A time-dependent family of curves \(\gamma _t\) for \({t\in [0,T]}\) is a solution to the elastic flow with initial datum \(\gamma _0\) in the maximal time interval [0, T], if there exists a parametrization

$$\begin{aligned} \gamma (t,x)\in C^{\frac{4+\alpha }{4}, 4+\alpha }\left( [0,T]\times [0,1],{\mathbb {R}}^2\right) , \end{aligned}$$

with \(\gamma \) regular and such that for every \(t\in [0,T], x\in [0,1]\) the system

$$\begin{aligned} {\left\{ \begin{array}{ll} (\partial _t\gamma )^\perp =\left( -2\partial _s^2k-k^3+ \mu k\right) \nu \\ \gamma (0,x)=\gamma _0(x), \end{array}\right. } \end{aligned}$$
(2.9)

coupled with boundary conditions (2.7), is satisfied.

Remark 2.4

The motion equation in (2.9) follows from (2.6), using Serret-Frenet equation (2.5) and recalling that

$$\begin{aligned} (\partial _s^\perp )^2 \varvec{\kappa }= \partial _s^2 \varvec{\kappa }+3 \left\langle \partial _s\varvec{\kappa }, \varvec{\kappa } \right\rangle \tau + \vert \varvec{\kappa } \vert ^2 \varvec{\kappa } \,. \end{aligned}$$
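For the reader's convenience, we sketch the verification of this identity. Differentiating \(\left\langle \partial _s\varvec{\kappa },\tau \right\rangle =-\vert \varvec{\kappa }\vert ^2\) gives \(\left\langle \partial _s^2\varvec{\kappa },\tau \right\rangle =-3\left\langle \partial _s\varvec{\kappa },\varvec{\kappa }\right\rangle \), so that, applying \(\partial _s^\perp \) to \(\partial _s^\perp \varvec{\kappa }=\partial _s\varvec{\kappa }+\vert \varvec{\kappa }\vert ^2\tau \),

$$\begin{aligned} (\partial _s^\perp )^2 \varvec{\kappa }&=\partial _s^2 \varvec{\kappa }+2\left\langle \partial _s\varvec{\kappa },\varvec{\kappa }\right\rangle \tau +\vert \varvec{\kappa }\vert ^2\varvec{\kappa }-\left( \left\langle \partial _s^2\varvec{\kappa },\tau \right\rangle +2\left\langle \partial _s\varvec{\kappa },\varvec{\kappa }\right\rangle \right) \tau \\ &=\partial _s^2 \varvec{\kappa }+3\left\langle \partial _s\varvec{\kappa },\varvec{\kappa }\right\rangle \tau +\vert \varvec{\kappa }\vert ^2\varvec{\kappa }\,. \end{aligned}$$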

Remark 2.5

Observe that the formulation of the problem given so far involves purely geometric quantities and hence it is invariant under reparametrizations. Thus, given a solution \(\gamma \) of (2.9), any reparametrization of \(\gamma \) still satisfies system (2.9).

Remark 2.6

As the authors pointed out in [32], in system (2.9) only the normal component of the velocity is prescribed. This does not mean that the tangential velocity is necessarily zero. Indeed, we can equivalently write the motion equation as

$$\begin{aligned} \partial _t\gamma =V\nu +\Lambda \tau , \end{aligned}$$
(2.10)

where \(V=-2\partial _ s^2 k-k^3+\mu k\) and \(\Lambda \) is some at least continuous function.

2.4 Energy monotonicity

In Proposition 2.8 we show that the energy of an evolving curve decreases in time, adapting the proof of [32, Proposition 2.20].

Lemma 2.7

If \(\gamma \) satisfies (2.10), the commutation rule

$$\begin{aligned} \partial _{t}\partial _{s}=\partial _{s}\partial _{t}+\left( kV-\partial _s \Lambda \right) \partial _{s}\, \end{aligned}$$

holds and the measure \({\,\textrm{d} }s\) evolves as

$$\begin{aligned} \partial _t({\,\textrm{d} }s)=\left( \partial _s \Lambda -kV\right) {\,\textrm{d} }s. \end{aligned}$$
(2.11)

Moreover, the unit tangent vector, the unit normal vector, and the scalar curvature of \(\gamma \) satisfy

$$\begin{aligned} \partial _{t}\tau =&\left( \partial _s V+\Lambda k\right) \nu \,,\nonumber \\ \partial _{t}\nu =&-\left( \partial _s V+\Lambda k\right) \tau \,, \end{aligned}$$
(2.12)
$$\begin{aligned} \partial _tk=&\left\langle \partial _{t}\varvec{\kappa },\nu \right\rangle =\partial _s^2 V+\Lambda \partial _s k+k^{2}V\,\nonumber \\ =&-2\partial _{s}^{4}k-5k^{2}\partial _{s}^{2}k-6k\left( \partial _{s}k\right) ^{2} +\Lambda \partial _{s}k-k^{5}+\mu \left( \partial _{s}^{2}k+k^3\right) \,. \end{aligned}$$
(2.13)

Proof

The proof of the lemma is obtained by direct computations, we refer for instance to [32, Lemma 2.19]. \(\square \)
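For instance, the last expression in (2.13) follows by substituting \(V=-2\partial _s^2k-k^3+\mu k\) into \(\partial _s^2 V+\Lambda \partial _sk+k^2V\): indeed

$$\begin{aligned} \partial _s^2V&=-2\partial _{s}^{4}k-3k^{2}\partial _{s}^{2}k-6k\left( \partial _{s}k\right) ^{2}+\mu \partial _{s}^{2}k\,,\\ k^{2}V&=-2k^{2}\partial _{s}^{2}k-k^{5}+\mu k^{3}\,, \end{aligned}$$

and summing these two lines together with \(\Lambda \partial _{s}k\) gives exactly \(-2\partial _{s}^{4}k-5k^{2}\partial _{s}^{2}k-6k\left( \partial _{s}k\right) ^{2} +\Lambda \partial _{s}k-k^{5}+\mu \left( \partial _{s}^{2}k+k^3\right) \).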

Proposition 2.8

Let \(\gamma _t\) be a solution to the elastic flow in the sense of Definition 2.3. Then

$$\begin{aligned} \partial _{t}\mathcal {E}(\gamma _t)=&-\int _{\gamma } V^2{\,\textrm{d} }s\,. \end{aligned}$$

Proof

Using the evolution laws collected in Lemma 2.7, we get

$$\begin{aligned} \partial _{t}\int _{\gamma }k^{2}+\mu {\,\textrm{d} }s =&\int _{\gamma }2k\partial _{t}k+\left( k^{2}+\mu \right) \left( \partial _s \Lambda -kV\right) {\,\textrm{d} }s\\ =&\int _{\gamma }2k\left( \partial _s^2V+ \partial _s k \Lambda +k^{2}V\right) +\left( k^{2}+\mu \right) \left( \partial _s \Lambda -kV\right) {\,\textrm{d} }s\\ =&\int _{\gamma }^{}2k\partial _s^2V+k^3V-\mu kV+\partial _s\left( \Lambda \left( k^2+\mu \right) \right) {\,\textrm{d} }s\,. \end{aligned}$$

Integrating twice by parts the term \(\int _{\gamma } 2k\partial _s^2V {\,\textrm{d} }s \) we obtain

$$\begin{aligned} \partial _{t}\int _{\gamma }k^{2}+\mu {\,\textrm{d} }s =-\int _{\gamma }^{}V^2{\,\textrm{d} }s+ \left( 2k\partial _s V-2\partial _s kV+\Lambda (k^2+\mu ) \right) \Big |_{0}^1. \end{aligned}$$
(2.14)

It remains to show that the contribution of the boundary term in (2.14) is zero, once we assume that Navier boundary conditions hold.

Since \(k(y)=0\) for \(y\in \{0,1\}\), we only need to show that

$$\begin{aligned} \left( -2\partial _s kV+ \mu \Lambda \right) \Big |_{0}^1 = 0 \,. \end{aligned}$$

From \(\gamma (y)= (\gamma _1 (y),0)\) for all times, we get \(\partial _t \gamma (y)_2=0\); combining this with the third order condition in (2.7), we obtain

$$\begin{aligned} 0 =&\langle \partial _t \gamma (y) , -2 \partial _s k(y) \nu (y)+ \mu \tau (y) \rangle \\=&\langle V(y) \nu (y) + \Lambda (y) \tau (y) , -2 \partial _s k(y) \nu (y)+ \mu \tau (y) \rangle \\ =&-2 \partial _s k(y) V(y) + \mu \Lambda (y) \, , \end{aligned}$$

where \(y \in \{ 0,1 \}\). \(\square \)
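A finite-dimensional caricature of this monotonicity (our own numerical illustration, not part of the proof): discretize \(\mathcal {E}\) as in (2.1) on an open polyline, pin the second component of the two endpoints to zero as in the attachment conditions, and run explicit gradient descent on the vertex positions. All names, the finite-difference gradient and the step size are ad hoc choices for the sketch.

```python
import numpy as np

def discrete_energy(pts, mu):
    # Open polyline: int k^2 + mu ds, with turning-angle curvature
    # at the interior vertices and total length over all edges.
    e = np.diff(pts, axis=0)
    l = np.linalg.norm(e, axis=1)
    e1, e2 = e[:-1], e[1:]
    ang = np.arctan2(e1[:, 0]*e2[:, 1] - e1[:, 1]*e2[:, 0],
                     (e1*e2).sum(axis=1))
    ds = 0.5*(l[:-1] + l[1:])
    return float(np.sum((ang/ds)**2*ds) + mu*l.sum())

def energy_gradient(pts, mu, h=1e-6):
    # Central finite differences: adequate for a small illustration.
    g = np.zeros_like(pts)
    for i in range(pts.shape[0]):
        for j in range(2):
            qp, qm = pts.copy(), pts.copy()
            qp[i, j] += h
            qm[i, j] -= h
            g[i, j] = (discrete_energy(qp, mu) - discrete_energy(qm, mu))/(2*h)
    return g

def flow_step(pts, mu, dt):
    g = energy_gradient(pts, mu)
    g[0, 1] = g[-1, 1] = 0.0   # endpoints may only slide along the x-axis
    return pts - dt*g
```

Starting from a half-circle over the x-axis with \(\mu =0.5\) (so that the arc is not critical), a few steps with a small step size strictly decrease the discrete energy, mirroring Proposition 2.8, while the endpoints stay on the axis.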

3 Short-time existence

In this section we show that, for a fixed admissible initial curve, there exists a maximal existence time T. To do so, we find a unique solution to the associated analytic problem defined in (3.2) using a standard linearization procedure. More precisely, we use the Solonnikov theory (see [46]) to prove the well-posedness of the linearized system and then we conclude with a fixed point argument. A key point is then to ensure that solving the analytic problem is enough to obtain a solution to the geometric problem (2.9) and that the solution of (2.9) is unique up to reparametrization.

3.1 Definition of the analytic problem

Let \(T>0\) and \(\alpha \in (0,1)\). Let us consider a time-dependent family of curves parametrized by a map \(\gamma \in C^{\frac{4+\alpha }{4},4+\alpha }([0,T]\times [0,1])\).

We compute the normal velocity of such moving curves in terms of the parametrization (see [21] for more details), that is

$$\begin{aligned} (\partial _t\gamma )^\perp =&-2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}} +12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} +5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}} +8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}}\nonumber \\&-35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}}\nonumber \\&+\left\langle 2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}}-12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}}-5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}}-8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}}\right. \nonumber \\&\left. +35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}},\tau \right\rangle \tau +\mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}} -\left\langle \mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}}, \tau \right\rangle \tau \,. \end{aligned}$$

We now aim to use a well-known technique, which was introduced for the first time by DeTurck in [17] for the Ricci flow and then has been employed in a large variety of situations (see for instance [14, 22, 32]).

More precisely, we choose as tangential velocity the function

$$\begin{aligned} \widetilde{\Lambda }&:= \left\langle -2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}}+12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}}+5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}}+8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} \right. \\ {}&\left. -35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}} +\mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}},\tau \right\rangle \, , \end{aligned}$$

turning (2.10) into a non-degenerate equation

$$\begin{aligned} \partial _t\gamma =&V\nu +\widetilde{\Lambda }\tau \nonumber \\ =&-2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}} +12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} +5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}} +8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} \nonumber \\&-35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}}+\mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}}\, . \end{aligned}$$
(3.1)

Moreover, we specify another tangential condition

$$\begin{aligned} \langle \partial _x^2 \gamma (y), \tau (y) \rangle =0 \qquad \text {for } y \in \{ 0,1\} \end{aligned}$$

and we notice that this, together with the curvature condition, is equivalent to the second order condition

$$\begin{aligned} \partial _x^2 \gamma (y) =0 \qquad \text {for } y \in \{ 0,1\}. \end{aligned}$$
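Indeed, since the oriented curvature satisfies \(k=\frac{1}{\vert \partial _x\gamma \vert ^{2}}\left\langle \partial _x^2 \gamma ,\nu \right\rangle \), the second derivative decomposes in the frame \((\tau ,\nu )\) as

$$\begin{aligned} \partial _x^2 \gamma (y)=\left\langle \partial _x^2 \gamma (y),\tau (y)\right\rangle \tau (y)+\vert \partial _x\gamma (y)\vert ^{2}k(y)\nu (y)\,, \end{aligned}$$

so \(\left\langle \partial _x^2 \gamma (y),\tau (y)\right\rangle =0\) and \(k(y)=0\) hold simultaneously if and only if \(\partial _x^2 \gamma (y)=0\).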

From now on, we identify the curve with its parametrization without further comments.

Definition 3.1

(Admissible initial parametrization) A map \(\gamma _0:[0,1] \rightarrow {{{\mathbb {R}}}}^2\) is an admissible initial parametrization if

  1.

    it belongs to \(C^{4+\alpha }([0,1], {{{\mathbb {R}}}}^2)\) for some \(\alpha \in (0,1)\);

  2.

    it satisfies the Navier boundary conditions in (2.7): attachment, curvature and third order conditions;

  3.

    it satisfies the non-degeneracy condition (2.8);

  4.

    it satisfies the following fourth order condition

    $$\begin{aligned} \left( V(0,y)\nu _0(y)+\widetilde{\Lambda } (0,y) \tau _0(y) \right) _2=0 \qquad \text {for } y \in \{0,1\}, \end{aligned}$$

    where \(\nu _0\), \(\tau _0\) are the normal and tangent unit vectors to \(\gamma _0\).

In the following, we refer to conditions \((2)-(4)\) in Definition 3.1 as compatibility conditions.

Definition 3.2

(Solution of the analytic problem) Let \(\gamma _0\) be an admissible initial parametrization as in Definition 3.1. A time-dependent parametrization \(\gamma _t\) for \({t\in [0,T]}\) is a solution to the analytic elastic flow with initial datum \(\gamma _0\) in the time interval [0, T] with \(T>0\), if

$$\begin{aligned} \gamma (t,x)\in C^{\frac{4+\alpha }{4}, 4+\alpha }\left( [0,T]\times [0,1],{\mathbb {R}}^2\right) , \end{aligned}$$

with \(\gamma \) regular and such that, for every \(t\in [0,T]\), \(x\in [0,1]\) and \(y \in \{0,1\}\), it satisfies the system

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\gamma = V\nu +\widetilde{\Lambda }\tau =-2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}} + l.o.t. \\ \gamma (y)_2=0 \quad &{}\text {attachment conditions,} \\ \partial _x^2 \gamma (y)=0 &{}\text {second order conditions,} \\ \left( -2\partial _sk(y) \nu (y) +\mu \tau (y)\right) _1 =0 &{}\text {third order conditions,} \\ \gamma (0,\cdot )= \gamma _0(\cdot ) &{}\text {initial condition.} \end{array}\right. } \end{aligned}$$
(3.2)

3.2 Linearization

This section is devoted to proving the existence and uniqueness of solutions to the linearized system associated to (3.2). To do so, we show that the linearized system can be solved using the general theory introduced by Solonnikov in [46].

We highlight that in this section we follow closely [21]. More precisely, we adapt the arguments developed for networks in [21, Section 3.3.2 and Section 3.3.3], to the case of one curve with endpoints constrained to the x-axis.

We linearize the highest order terms of the motion equation (3.1) around the initial parametrization \(\gamma _0\) and we obtain

$$\begin{aligned} \partial _t\gamma +\frac{2}{\vert \partial _x\gamma _0 \vert ^4}\partial _x^4 \gamma =&\left( \frac{2}{\vert \partial _x\gamma _0 \vert ^4} -\frac{2}{\vert \partial _x\gamma \vert ^4}\right) \partial _x^4 \gamma +{\widetilde{f}}(\partial _x^3 \gamma ,\partial _x^2 \gamma ,\partial _x\gamma ) \nonumber \\=&:f(\partial _x^4 \gamma ,\partial _x^3 \gamma , \partial _x^2 \gamma ,\partial _x\gamma )\,. \end{aligned}$$
(3.3)

Then, after noticing that the attachment condition and the second order condition are already linear, we linearize the highest order terms of the third order condition, that is

$$\begin{aligned} \left( -\frac{1}{\vert \partial _x\gamma _0 \vert ^3} \langle \partial _x^3 \gamma , \nu _0 \rangle \nu _0 \right) _1=&\left( -\frac{1}{\vert \partial _x\gamma _0 \vert ^3} \langle \partial _x^3 \gamma , \nu _0 \rangle \nu _0 + \frac{1}{\vert \partial _x\gamma \vert ^3} \langle \partial _x^3 \gamma , \nu \rangle \nu + h (\partial _x\gamma )\right) _1 \nonumber \\=&: b(\partial _x^3 \gamma , \partial _x\gamma )\, . \end{aligned}$$
(3.4)

Thus, the linearized system associated to (3.2) is given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\gamma +\frac{2}{\vert \partial _x\gamma _0 \vert ^4}\partial _x^4 \gamma =f \\ \gamma _2=0 \qquad &{}\text {attachment conditions,}\\ \partial _x^2 \gamma =0 &{}\text {second order conditions,}\\ \left( -\frac{1}{\vert \partial _x\gamma _0 \vert ^3} \left\langle \partial _x^3 \gamma ,\nu _0\right\rangle \nu _0\right) _1=b &{}\text {third order conditions,}\\ \gamma (0)=\gamma _0 &{}\text {initial condition} \end{array}\right. } \end{aligned}$$
(3.5)

where \(f, b\) are defined in (3.3), (3.4) and we have omitted the dependence on \((t,x)\in [0,T]\times [0,1]\) in the motion equation, on \((t,y)\in [0,T]\times \{0,1\}\) in the boundary conditions and on \(x\in [0,1]\) in the initial condition.

Remark 3.3

Replacing the right-hand side of system (3.5) with \((f,b,\psi )\), we get the general system

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\gamma +\frac{2}{\vert \partial _x\gamma _0 \vert ^4}\partial _x^4 \gamma =f \\ \gamma _2=0 \qquad &{}\text {attachment conditions,}\\ \partial _x^2 \gamma =0 &{}\text {second order conditions,}\\ \left( -\frac{1}{\vert \partial _x\gamma _0 \vert ^3} \left\langle \partial _x^3 \gamma ,\nu _0\right\rangle \nu _0 \right) _1=b &{}\text {third order conditions,}\\ \gamma (0)=\psi &{}\text {initial condition} \end{array}\right. } \end{aligned}$$
(3.6)

where \(f\in C^{\frac{\alpha }{4},\alpha }([0,T]\times [0,1],{\mathbb {R}}^2)\), \((b(\cdot , 0),b(\cdot ,1))\in C^{\frac{1+\alpha }{4}}([0,T], {{{\mathbb {R}}}}^2)\) and \(\psi \in C^{4+\alpha }\left( [0,1],{\mathbb {R}}^2\right) \).

Definition 3.4

[Linear compatibility conditions] Let \((f, b)\) be a given right-hand side for the linear system (3.6). A function \(\psi \in C^{4+\alpha }\left( [0,1],{\mathbb {R}}^2\right) \) satisfies the linear compatibility conditions with respect to \((f, b)\) if for \(y\in \{0,1\}\) the following hold

$$\begin{aligned}&\psi (y)_2=0 \,, \\ {}&\partial _x^2 \psi (y)=0 \,, \\ {}&\left( -\frac{1}{\vert \partial _x\gamma _0 \vert ^3}\left\langle \partial _x^3 \psi (y),\nu _0(y) \right\rangle \nu _0(y) \right) _1 =b(0,y), \\ {}&\left( \frac{2}{\vert \partial _x\gamma _0 \vert ^4}\partial _x^4\psi (y)-f(0,y)\right) _2=0 \,. \end{aligned}$$
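The fourth condition above is a first-order compatibility requirement: differentiating the attachment condition \(\gamma (t,y)_2=0\) in time at \(t=0\) and substituting the motion equation of (3.6) gives

$$\begin{aligned} 0=\partial _t\gamma (0,y)_2=\left( f(0,y)-\frac{2}{\vert \partial _x\gamma _0 \vert ^4}\partial _x^4 \psi (y)\right) _2 \,, \end{aligned}$$

so any solution which is regular up to \(t=0\) forces this relation on the initial datum \(\psi \).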

Theorem 3.5

Let \(\alpha \in (0,1)\) and let \(T>0\). Suppose that

  • \(f\in C^{\frac{\alpha }{4},\alpha }([0,T]\times [0,1],{\mathbb {R}}^2)\);

  • \((b(\cdot , 0),b(\cdot ,1))\in C ^{\frac{1+\alpha }{4}}([0,T], {{{\mathbb {R}}}}^2)\,\);

  • \(\psi \in C^{4+\alpha }\left( [0,1],{\mathbb {R}}^2\right) \);

  • \(\psi \) satisfies the linear compatibility conditions in Definition 3.4 with respect to \((f, b)\).

Then, the linearized problem (3.6) has a unique solution \(\gamma \in C^{\frac{4+\alpha }{4},4+\alpha }([0,T]\times [0,1],{\mathbb {R}}^2)\).

Moreover, for all \(T>0\) there exists a \(C(T)>0\) such that the solution satisfies

$$\begin{aligned} \Vert \gamma \Vert _{\frac{4+\alpha }{4},4+\alpha } \le C(T)\left( \Vert f\Vert _{\frac{\alpha }{4},\alpha }+ \Vert b\Vert _{\frac{1+\alpha }{4}}+ \Vert \psi \Vert _{4+\alpha } \right) \,. \end{aligned}$$

Proof

To show the result we have to prove that system (3.6) satisfies all the hypotheses of the general [46, Theorem 4.9].

Using the notation of [46], we write \(\gamma =(u,v)\) and we denote by b and r, respectively, the number of boundary and initial conditions, which in our case are \(b=2\), \(r=2\).

Moreover, we write the motion equation in the form

$$\begin{aligned} {\mathcal {L}} \gamma = f \end{aligned}$$
(3.7)

where the \(2 \times 2\) matrix \({\mathcal {L}}\) is given by

$$\begin{aligned} {\mathcal {L}} (x,t, \partial _x, \partial _t)=\begin{bmatrix} \partial _t+\frac{2}{\vert \partial _x\gamma _0\vert ^4}\partial ^4_x &{} 0\\ 0 &{} \partial _t+\frac{2}{\vert \partial _x\gamma _0\vert ^4}\partial ^4_x \end{bmatrix} \end{aligned}$$

and the vector \(f= (f^1, f^2)\) is the right-hand side of the motion equation in system (3.6).

  • We firstly show that system (3.7) satisfies the parabolicity condition [46, page 8]. As in [46], we call \(\mathcal {L}_0\) the principal part of the matrix \({\mathcal {L}}\) and we choose the integers \(s_k,t_j\) in [46, page 8] as follows: \(s_k=4\) for \(k \in \{1,2\}\) and \(t_j=0\) for \(j \in \{1,2\}\). Hence, we have \(\mathcal {L}_0=\mathcal {L}\) and its determinant

    $$\begin{aligned} \textrm{det}\mathcal {L}_0(x,t,i\xi ,p)= \left( \frac{2}{\vert \partial _x\gamma _0 \vert ^4}\xi ^4+p\right) ^2 \end{aligned}$$

    is a polynomial of degree two in p with one root

    $$\begin{aligned} p=-\frac{2}{\vert \partial _x\gamma _0 \vert ^4}\xi ^4 \end{aligned}$$

    of multiplicity two. Then, choosing \(\delta \le \frac{2}{\vert \partial _x\gamma _0 \vert ^4} \), the conditions of [46, page 8] are satisfied and the system is parabolic in the sense of Solonnikov.
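    Concretely, the root condition in [46, page 8] requires \(\Re (p)\le -\delta \vert \xi \vert ^{4}\) for every root p and some uniform \(\delta >0\); with the root computed above this reads

    $$\begin{aligned} \Re (p)=-\frac{2}{\vert \partial _x\gamma _0(x) \vert ^4}\xi ^4 \le -\delta \xi ^4 \qquad \text {for all } x\in [0,1],\, \xi \in {\mathbb {R}}\,, \end{aligned}$$

    which holds with the uniform choice \(\delta := 2/\sup _{x\in [0,1]}\vert \partial _x\gamma _0(x) \vert ^4>0\).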

  • As shown in [52, pages 11-15], the compatibility condition at boundary points stated in [46, page 11] is equivalent to the following Lopatinskii-Shapiro condition, which we check only for \(y=0\) (the case \(y=1\) can be treated analogously). Let \(\lambda \in {\mathbb {C}}\) with \( \Re (\lambda )>0\) be arbitrary. The Lopatinskii-Shapiro condition at y is satisfied if every solution \(\gamma \in C^4([0,\infty ),{\mathbb {C}}^2)\) to the system of ODEs

    $$\begin{aligned} {\left\{ \begin{array}{ll} \lambda \gamma (x)+\frac{1}{\vert \partial _x\gamma _0 \vert ^4}\partial _x^4 \gamma (x)=0\\ \gamma (y)_2=0 \\ \partial _x^2 \gamma (y)=0 \\ \left( \frac{1}{\vert \partial _x\gamma _0 \vert ^3} \left\langle \partial _x^3\gamma (y),\nu _0(y)\right\rangle \nu _0(y)\right) _1=0 \\ \end{array}\right. } \end{aligned}$$
    (3.8)

    where \(x \in [0,\infty )\), which satisfies \(\lim _{x\rightarrow \infty }|\gamma (x)|=0\), is the trivial solution. To do so, we consider a solution \(\gamma \) to (3.8) such that \(\lim _{x\rightarrow \infty }|\gamma (x)|=0\). We test the motion equation with \(\vert \partial _x\gamma _0\vert \left\langle \overline{\gamma }(x),\nu _0\right\rangle \nu _0\) and integrate by parts twice to get

    $$\begin{aligned} 0=&\lambda \vert \partial _x\gamma _0\vert \int _0^\infty \vert \left\langle \gamma (x), \nu _0\right\rangle \vert ^2 \,\textrm{d}x +\frac{1}{\vert \partial _x\gamma _0\vert ^3} \int _0^\infty \vert \left\langle \partial _x^2 \gamma (x),\nu _0\right\rangle \vert ^2\,\textrm{d}x \nonumber \\&+\frac{1}{\vert \partial _x\gamma _0\vert ^3}\left\langle \overline{\gamma }(0),\nu _0\right\rangle \left\langle \partial _x^3 \gamma (0),\nu _0 \right\rangle -\frac{1}{\vert \partial _x\gamma _0\vert ^3} \left\langle \overline{\partial _x\gamma }(0),\nu _0\right\rangle \left\langle \partial _x^2 \gamma (0),\nu _0 \right\rangle \,, \end{aligned}$$
    (3.9)

    where we have already used the fact that all derivatives decay to zero for x tending to infinity, due to the specific exponential form of the solutions to (3.8). We now observe that, since \(\gamma _0\) is an admissible initial parametrization, the first component of \(\nu _0\) is bounded away from zero. Hence, from the third order condition in system (3.8) it follows that \(\left\langle \partial _x^3 \gamma (0),\nu _0 \right\rangle =0\). This condition together with the second order condition implies that the boundary terms in (3.9) vanish. Then, taking the real part of (3.9) and recalling that \( \Re (\lambda )>0\), we have \(\left\langle \gamma (x),\nu _0 \right\rangle =0\) for all \(x \in [0, \infty )\). In particular, from the attachment condition in (3.8), it follows that \(\gamma (0)=0\). As before, testing the motion equation with \(\vert \partial _x\gamma _0\vert \left\langle \overline{\gamma }(x),\tau _0\right\rangle \tau _0\) and integrating by parts, we get

    $$\begin{aligned} 0=&\lambda \vert \partial _x\gamma _0\vert \int _0^\infty \vert \left\langle \gamma (x), \tau _0\right\rangle \vert ^2 \,\textrm{d}x +\frac{1}{\vert \partial _x\gamma _0\vert ^3} \int _0^\infty \vert \left\langle \partial _x^2 \gamma (x),\tau _0\right\rangle \vert ^2\,\textrm{d}x\nonumber \\&+\frac{1}{\vert \partial _x\gamma _0\vert ^3}\left\langle \overline{\gamma }(0),\tau _0\right\rangle \left\langle \partial _x^3 \gamma (0),\tau _0 \right\rangle -\frac{1}{\vert \partial _x\gamma _0\vert ^3} \left\langle \overline{\partial _x\gamma }(0),\tau _0\right\rangle \left\langle \partial _x^2 \gamma (0),\tau _0 \right\rangle \,. \end{aligned}$$
    (3.10)

    The boundary terms in (3.10) vanish since \(\gamma (0)=0\) and the second order condition holds. Hence, considering again the real part of (3.10), we have that \(\left\langle \gamma (x), \tau _0\right\rangle =0\) for all \(x \in [0,\infty )\). So, we conclude that \(\gamma (x)=0\) for all \(x \in [0,\infty )\).
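    In both cases the conclusion rests on the same real-part argument: once the boundary terms vanish, taking real parts in (3.9) leaves

    $$\begin{aligned} 0=\Re (\lambda )\, \vert \partial _x\gamma _0\vert \int _0^\infty \vert \left\langle \gamma (x), \nu _0\right\rangle \vert ^2 \,\textrm{d}x +\frac{1}{\vert \partial _x\gamma _0\vert ^3} \int _0^\infty \vert \left\langle \partial _x^2 \gamma (x),\nu _0\right\rangle \vert ^2\,\textrm{d}x \,, \end{aligned}$$

    a sum of two non-negative terms, so both integrals must vanish as \(\Re (\lambda )>0\); the same applies to (3.10) with \(\tau _0\) in place of \(\nu _0\).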

  • Finally, to check the complementary condition for the initial datum stated in [46, page 12], we observe that the \(2\times 2\) matrix \([C_{\alpha j}]\) is the identity matrix. Then, choosing \(\gamma _{\alpha j}=0\) for \(\alpha \in \{1,2\}\) and \(j\in \{1,2\}\), we obtain \(\rho _\alpha =0\) and \(C_0=Id\). Moreover, the rows of the matrix \(\mathcal {D}(x,p)=\hat{\mathcal {L}_0}(x,0,0,p)=p Id\) are linearly independent modulo the polynomial \(p^2\).\(\square \)

3.3 Short-time existence of the analytic problem

From now on, we fix \(\alpha \in (0,1)\) and we consider an admissible initial parametrization \(\gamma _0\) as in Definition 3.1, with \(\Vert \gamma _0 \Vert _{4+\alpha }=R\). Moreover, with a slight abuse of notation, we denote by \(b(\cdot )\) the vector \((b( \cdot , 0), b(\cdot , 1))\) in the statement of Theorem 3.5.

Definition 3.6

For \(T>0\) we define the linear spaces

$$\begin{aligned} {\mathbb {E}}_T:=\{&\gamma \in C^{\frac{4+\alpha }{4},4+\alpha }([0,T]\times [0,1],{\mathbb {R}}^2) \;\text {such that for}\;t\in [0,T]\,, \\ {}&\text {attachment and second order conditions hold}\} \,,\\ {\mathbb {F}}_T:=\{&(f,b,\psi )\in C^{\frac{\alpha }{4},\alpha }([0,T]\times [0,1],{\mathbb {R}}^2) \times C^{\frac{1+\alpha }{4}}([0,T], {{{\mathbb {R}}}}^2) \times C^{4+\alpha }\left( [0,1],{\mathbb {R}}^2\right) \\&\text {such that the linear compatibility conditions hold} \} \,, \end{aligned}$$

endowed with the norms

$$\begin{aligned} \Vert \gamma \Vert _{{\mathbb {E}}_T}=&\Vert \gamma \Vert _{\frac{4+\alpha }{4},4+\alpha } \, , \\\Vert (f,b,\psi ) \Vert _{{\mathbb {F}}_T}&= \Vert f \Vert _{\frac{\alpha }{4}, \alpha }+ \Vert b \Vert _{\frac{1+\alpha }{4}}+ \Vert \psi \Vert _{4+\alpha }. \end{aligned}$$

Moreover, we consider the affine spaces

$$\begin{aligned} {\mathbb {E}}^0_T:=\{&\gamma \in {\mathbb {E}}_T\,\text {such that }\,\gamma _{\vert t=0}=\gamma _0\} \,,\\ {\mathbb {F}}^0_T:=\{&(f,b)\,\text {such that }\, (f,b,\gamma _0)\in {\mathbb {F}}_{T}\} \times \{\gamma _0\} \,. \end{aligned}$$

We remark that Lemma 3.7 and Lemma 3.8 below are respectively [21, Lemma 3.17] and [21, Lemma 3.23].

Lemma 3.7

For \(T>0\), the map \(L_{T}:{\mathbb {E}}_T\rightarrow {\mathbb {F}}_T\) defined by

$$\begin{aligned} L_{T}(\gamma ):= \begin{pmatrix} \partial _t\gamma +\frac{2}{\vert \partial _x\gamma _0\vert ^4}\partial _x^4 \gamma \\ \left( -\frac{1}{\vert \partial _x\gamma _0 \vert ^3} \left\langle \partial _x^3\gamma ,\nu _0\right\rangle \nu _0 \right) _1\\ \gamma _0 \end{pmatrix} \,, \end{aligned}$$

is a continuous isomorphism.

In the following we denote by \(L^{-1}_T\) the inverse of \(L_T\), by \(B_M\) the open ball of radius \(M>0\) and center 0 in \({\mathbb {E}}_T\) and by \(\overline{B_M}\) its closure.

Before proceeding we notice that, since the admissible initial parametrization \(\gamma _0: [0,1] \rightarrow {{{\mathbb {R}}}}^2\) is a regular curve, there exists a constant \(C>0\) such that

$$\begin{aligned} \inf _{x\in [0,1]}\vert \partial _x\gamma _0 \vert \ge C, \end{aligned}$$
(3.11)

which obviously implies that

$$\begin{aligned} \sup _{x\in [0,1]}\frac{1}{\vert \partial _x\gamma _0 \vert } \le \frac{1}{C}. \end{aligned}$$

Then, as shown in [21], there exists a constant \({\widetilde{C}}\) depending on R and C, such that for every \(j\in {\mathbb {N}}\) it holds

$$\begin{aligned} \left\Vert \frac{1}{\vert \partial _x\gamma _0\vert ^j} \right\Vert _{\alpha } \le \left( \frac{\Vert \partial _x\gamma _0 \Vert _{\alpha }}{C^2}\right) ^j \le \left( \frac{R}{C^2}\right) ^j\qquad \text {and} \qquad \left\Vert \frac{1}{\vert \partial _x\gamma _0 \vert ^j}\right\Vert _{1+\alpha } \le {\widetilde{C}}(R,C)\,. \end{aligned}$$
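We sketch the first estimate for \(j=1\), which follows from (3.11) and the definition of the Hölder norm: for \(x,{\tilde{x}}\in [0,1]\),

$$\begin{aligned} \left| \frac{1}{\vert \partial _x\gamma _0(x) \vert }-\frac{1}{\vert \partial _x\gamma _0({\tilde{x}}) \vert } \right| =\frac{\big \vert \vert \partial _x\gamma _0({\tilde{x}})\vert -\vert \partial _x\gamma _0(x)\vert \big \vert }{\vert \partial _x\gamma _0(x)\vert \, \vert \partial _x\gamma _0({\tilde{x}})\vert } \le \frac{\Vert \partial _x\gamma _0 \Vert _{\alpha }}{C^2}\, \vert x-{\tilde{x}}\vert ^{\alpha }\,, \end{aligned}$$

while \(\sup _{x}1/\vert \partial _x\gamma _0(x)\vert \le 1/C\le \Vert \partial _x\gamma _0 \Vert _{\alpha }/C^2\); the case \(j>1\) then follows from the algebra property of the Hölder norm.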

We also notice that these estimates are preserved during the flow. More precisely, following the proof in [21], one can show that there exists \({\widetilde{T}}(M, C)\in (0,1]\) such that for \(T\in [0,{\widetilde{T}}(M,C)]\) every curve \(\gamma \in {\mathbb {E}}^0_T\cap B_M\) is regular and for all \(t\in [0,{\widetilde{T}}(M,C)]\) it holds

$$\begin{aligned} \sup _{x\in [0,1]} \frac{1}{\vert \partial _x\gamma (t,x)\vert }\le \frac{2}{C}. \end{aligned}$$

Furthermore, for every \(j \in {\mathbb {N}}\) and \(y \in \{0,1\}\), we have

$$\begin{aligned} \left\Vert \frac{1}{\vert \partial _x\gamma \vert ^j}\right\Vert _{\frac{\alpha }{4},\alpha } \le \left( \frac{4M}{C^2}\right) ^j \qquad \text {and} \qquad \left\Vert \frac{1}{\vert \partial _x\gamma (y)\vert ^j}\right\Vert _{\frac{1+\alpha }{4}} \le {\widetilde{C}}(R,C). \end{aligned}$$

Lemma 3.8

For \(T\in (0, {\widetilde{T}}(M, C)]\), the map \(N_T(\gamma ):= (N_{T,1}, N_{T,2}, \gamma _0)\) given by

$$\begin{aligned} N_{T,1}:&{\left\{ \begin{array}{ll} {\mathbb {E}}^0_T &{}\rightarrow C^{\frac{\alpha }{4},\alpha }([0,T]\times [0,1],{\mathbb {R}}^2),\\ \gamma &{}\mapsto f(\gamma ):=f(\partial _x^4 \gamma , \partial _x^3 \gamma , \partial _x^2 \gamma , \partial _x\gamma ), \end{array}\right. }\\ N_{T,2}:&{\left\{ \begin{array}{ll} {\mathbb {E}}^0_T&{}\rightarrow C^{\frac{1+\alpha }{4}}([0,T],{\mathbb {R}}^2), \\ \gamma &{}\mapsto b(\gamma ):= b(\partial _x^3 \gamma , \partial _x\gamma ) \end{array}\right. } \end{aligned}$$

where \(f, b\) are defined in (3.3), (3.4) respectively, is a well-defined mapping from \({\mathbb {E}}^0_T\) to \({\mathbb {F}}^0_T\).

Proof

We have that \(\gamma (t,\cdot )\) is a regular curve thanks to the discussion above, hence \(N_T\) is well defined. In order to show that \(N_T(\gamma ) \in {\mathbb {F}}^0_T\), we have to prove that \(\gamma _0\) satisfies the linear compatibility conditions with respect to \((N_{T,1}, N_{T,2})\). This easily follows from the definition of \(N_{T,1},N_{T,2}\) and the fact that \(\gamma _0\) is an admissible initial parametrization as in Definition 3.1. \(\square \)

Definition 3.9

Let \(\gamma _0\) be an admissible initial parametrization and let \(C>0\) be the constant given by (3.11). For \(M>0\) and \(T \in (0, {\widetilde{T}}(M, C)]\) we define the mapping \(K_T:{\mathbb {E}}^0_T \rightarrow {\mathbb {E}}^0_T\) as \(K_T:= L^{-1}_T N_T\).

With a proof similar to [21, Proposition 3.28 and Proposition 3.29] one can prove the following result.

Proposition 3.10

There exists a positive radius \(M=M(R,C)\) and a positive time \({\hat{T}}(M) \in (0, {\widetilde{T}}(M,C))\) such that for all \(T\in (0,{\hat{T}}(M)]\) the map \(K_T:{\mathbb {E}}^0_T\cap \overline{B_{M}}\rightarrow {\mathbb {E}}^0_T\cap \overline{B_{M}}\) is well-defined and it is a contraction.

Theorem 3.11

Let \(\gamma _0\) be an admissible initial parametrization as in Definition 3.1. There exists a positive radius M and a positive time T such that the system (3.2) has a unique solution in \(C^{\frac{4+\alpha }{4},4+\alpha }\left( [0,T]\times [0,1]\right) \cap \overline{B_M}\).

Proof

Let M and \({\hat{T}}(M)\) be the radius and time given by Proposition 3.10 and let \(T\in (0,{\hat{T}}(M)]\). The solutions of (3.2) in \(C^{\frac{4+\alpha }{4},4+\alpha }\left( [0,T]\times [0,1]\right) \cap \overline{B_M}\) are exactly the fixed points of \(K_T\) in \({\mathbb {E}}_T^0\cap \overline{B_M}\). Since \(K_T\) is a contraction of the complete metric space \({\mathbb {E}}_T^0\cap \overline{B_M}\), by the Banach-Caccioppoli contraction theorem it admits a unique fixed point. \(\square \)

3.4 Geometric existence and uniqueness

In Theorem 3.11 we show that there exists a unique solution to the analytic problem (3.2) provided that the initial curve is admissible. In this section, we first establish a relation between geometrically admissible initial curves and admissible initial parametrizations, then we show the geometric uniqueness of the flow, in the sense that up to reparametrization the geometric problem (2.9) has a unique solution.

We remark that the following technique was introduced by Garcke and Novick-Cohen in [23], and has since been employed, for instance, by Garcke, Pluda et al. in [21, 22, 24] for the cases of the shortening and elastic flows of networks.

Lemma 3.12

Suppose that \(\gamma _0\) is a geometrically admissible initial curve as in Definition 2.2. Then, there exists a smooth function \(\psi _0:[0,1]\rightarrow [0,1]\) such that the reparametrization \(\widetilde{\gamma }_0 = \gamma _0 \circ \psi _0\) of \(\gamma _0\) is an admissible initial parametrization for the analytic problem (3.2).

Proof

We look for a smooth map \(\psi _0:[0,1]\rightarrow [0,1]\) with \(\partial _x\psi _0 (x)\ne 0\) for every \(x\in [0,1]\), such that \(\widetilde{\gamma }_0 = \gamma _0 \circ \psi _0: [0,1] \rightarrow {{{\mathbb {R}}}}^2\) is regular and of class \(C^{4+\alpha }([0,1])\). If \(\psi _0(y)=y\) for \(y \in \{0,1\}\), then \(\widetilde{\gamma }_0\) clearly satisfies the attachment condition. Moreover, since the geometric quantities are invariant under reparametrization, also the non-degeneracy condition and the third-order condition are still satisfied. In order to fulfil the second order condition \(\partial _x^2 \widetilde{\gamma }_0(y)=0\), we consider a map \(\psi _0\) such that

$$\begin{aligned} \partial _x\psi _0(y)=1 \qquad \text {and} \qquad \partial _x^2 \psi _0 (y)=-\frac{\partial _x^2 \gamma _0 (y)}{\partial _x\gamma _0(y)} \end{aligned}$$

for \(y\in \{0,1\}\). Thus, it remains to show that

$$\begin{aligned} \left( {{\widetilde{V}}}_0 \widetilde{\nu }_0 + {{\widetilde{T}}}_0 \widetilde{\tau }_0 \right) _2 =0 \,. \end{aligned}$$

As noticed above, this is equivalent to

$$\begin{aligned} \left( V_0 \nu _0 + {{\widetilde{T}}}_0 \tau _0 \right) _2 =0 \,, \end{aligned}$$

however, since \(\gamma _0\) is a geometrically admissible initial curve, it is enough to prove that

$$\begin{aligned} {{\widetilde{T}}}_0 - T_0=0 \,. \end{aligned}$$
(3.12)

Thus, asking that \(\partial _x^3 \psi _0(y) =1\), we rewrite relation (3.12) as

$$\begin{aligned} g_1 (\partial _x\gamma ) (y) \partial _x^4 \psi _0 (y) + g_2 (\partial _x\gamma , \partial _x^2 \gamma , \partial _x^3 \gamma )(y)=0 \end{aligned}$$

where \(g_1, g_2\) are non-linear functions. Hence, \(\partial _x^4 \psi _0(y)\) is uniquely determined for \(y \in \{0,1\}\). In the end, we may define \(\psi _0\) near each boundary point through its fourth-order Taylor polynomial, join these local definitions inside the interval (0, 1) and then smooth the result. \(\square \)

Definition 3.13

Let \(\gamma _0\) be a geometrically admissible initial curve as in Definition 2.2 and \(T>0\). A time-dependent family of curves \(\gamma _t\) for \(t \in [0,T)\) is a maximal solution to the elastic flow with initial datum \(\gamma _0\), if it is a solution in the sense of Definition 2.3 in \([0, {{\hat{T}}}]\) for every \({{\hat{T}}} < T\) and if there does not exist a solution \(\widetilde{\gamma }_t\) in \([0, {{\widetilde{T}}}]\) with \({{\widetilde{T}}} > T\) and such that \(\gamma =\widetilde{\gamma }\) in (0, T).

Following the arguments in [22, Lemma 5.8 and Lemma 5.9], one can show that a maximal solution to the elastic flow always exists and it is unique up to reparametrization. Hence, from now on we only consider the time T in Definition 3.13, which we call maximal time of existence and we denote by \(T_{\max }\).

We notice that the following theorem is slightly different from the corresponding one in [22], where the authors first prove the geometric uniqueness in a “generic” time interval [0, T] and then show the existence of \(T_{\max }\) using the fact that the solution is unique in a geometric sense. However, with an intermediate step, the result can be stated as follows.

Theorem 3.14

[Geometric existence and uniqueness] Let \(\gamma _0\) be a geometrically admissible initial curve as in Definition 2.2. Then, there exists a positive time \(T_{\max }\) such that within the time interval \([0,T_{\max })\) there is a unique elastic flow \(\gamma _t\) in the sense of Definition 2.3.

Proof

By Lemma 3.12 there exists a reparametrization \(\widetilde{\gamma }_0\) of \(\gamma _0\) which is an admissible initial parametrization in the sense of Definition 3.1. Then, by Theorem 3.11 there exists a solution \(\widetilde{\gamma }_t\) of system (3.2) in some maximal time interval \([0, {{\widetilde{T}}}_{\max }]\). In particular, \(\widetilde{\gamma }_t\) is a solution to system (2.9).

Let us suppose that \(\gamma _t\) is another solution to the elastic flow in the sense of Definition 2.3 in a time interval \([0, T']\), with the same geometrically admissible initial curve. We aim to show that there exists a time \(T_{\max } \in (0, \min \{{{\widetilde{T}}}_{\max }, T'\})\) such that \(\widetilde{\gamma }_t = \gamma _t\) (as curves) for every \(t \in [0, T_{\max }]\).

To be precise, we need to construct a regular reparametrization \(\psi (t,x): [0, T_{\max }] \times [0,1] \rightarrow [0,1]\), such that the reparametrized curve \(\sigma (t,x) = \gamma (t, \psi (t,x))\) is a solution to the analytic problem (3.2) and coincides with \(\widetilde{\gamma }_t\) in a possibly small but positive time interval. Computing the space and time derivatives of \(\sigma (t,x)\) via the chain rule and substituting them into the evolution equation

$$\begin{aligned} \partial _t\sigma (t,x) = \frac{\partial _x^4 \sigma }{\vert \partial _x\sigma \vert ^4}+ l.o.t. \end{aligned}$$

we get the following evolution equation for \(\psi \)

$$\begin{aligned} \partial _t\psi (t,x) =&- \frac{ \langle \partial _t\gamma (t, \psi (t,x)) ,\partial _x\gamma (t, \psi (t,x)) \rangle }{\vert \partial _x\gamma (t, \psi (t,x)) \vert ^2}\\&+\frac{\langle \partial _x^4 \gamma (t, \psi (t,x)),\partial _x\gamma (t, \psi (t,x)) \rangle }{\vert \partial _x\gamma (t, \psi (t,x)) \vert ^6} \\ {}&+ \frac{6 \langle \partial _x^3 \gamma (t, \psi (t,x)), \partial _x\gamma (t, \psi (t,x)) \rangle \partial _x^2 \psi (t,x)}{\vert \partial _x\gamma (t, \psi (t,x)) \vert ^6 (\partial _x\psi (t,x))^2}\\&+\frac{3 \langle \partial _x^2 \gamma (t, \psi (t,x)), \partial _x\gamma (t, \psi (t,x)) \rangle (\partial _x^2 \psi (t,x))^2}{\vert \partial _x\gamma (t, \psi (t,x)) \vert ^6 (\partial _x\psi (t,x))^4} \\ {}&+\frac{ 4 \langle \partial _x^2 \gamma (t, \psi (t,x)), \partial _x\gamma (t, \psi (t,x)) \rangle \partial _x^3 \psi (t,x)}{\vert \partial _x\gamma (t, \psi (t,x)) \vert ^6 (\partial _x\psi (t,x))^3}\\&+ \frac{\partial _x^4 \psi (t,x)}{\vert \partial _x\gamma (t, \psi (t,x)) \vert ^2 (\partial _x\psi (t,x))^4}+ l.o.t. \, . \end{aligned}$$

Taking into account the boundary conditions, we have that such parametrization has to satisfy the following boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\psi (t,x) =\frac{\partial _x^4 \psi (t,x)}{\vert \partial _x\gamma (t, \psi (t,x)) \vert ^2 (\partial _x\psi (t,x))^4} + g \\ \psi (t, y)=y \\ \partial _x^2 \psi (t,y) = - \frac{\langle \partial _x^2 \gamma (t, \psi (t,y)), \partial _x\gamma (t, \psi (t,y)) \rangle (\partial _x\psi (t,y))^2}{ \vert \partial _x\gamma (t, \psi (t,y))\vert ^2} \\ \psi (0,x)=\psi _0(x) \end{array}\right. } \end{aligned}$$
(3.13)

for \(y \in \{0,1\}\) and \(t \in [0, T_{\max }]\), where the function \(\psi _0\) is given by Lemma 3.12 and the terms in g depend on the solution \(\psi \), on \(\partial _x^j \psi \) for \(j \in \{1,2,3\}\) and on \(\partial _t\gamma \), \(\partial _x^j \gamma \) for \(j \in \{1,2,3,4\}\). From the computation above, it follows that the function \(\gamma \) and its time-space derivatives also depend on \(\psi \). To remove this dependence, we consider the associated problem for the inverse of \(\psi \), that is \(\xi (t, \cdot ) = \psi ^{-1} (t, \cdot )\). Then, the differentiation rules

$$\begin{aligned} \partial _z\xi (t,z)&= \partial _x\psi (t, \xi (t, z))^{-1} \\\partial _z^2 \xi (t,z)&= - (\partial _z\xi (t,z))^3 \partial _x^2 \psi (t,\xi (t,z)) \\\partial _z^3 \xi (t,z)&= 3 \frac{(\partial _z^2 \xi (t,z))^2}{\partial _z\xi (t,z)}- (\partial _z\xi (t,z))^4 \partial _x^3 \psi (t,\xi (t,z)) \\\partial _z^4 \xi (t,z)&=- 15 \frac{ (\partial _z^2 \xi (t,z))^3}{(\partial _z\xi (t,z))^2} +10\frac{\partial _z^2 \xi (t,z) \partial _z^3 \xi (t,z)}{\partial _z\xi (t,z)}- (\partial _z\xi (t,z))^5\partial _x^4 \psi (t, \xi (t,z)) \end{aligned}$$

yield the evolution equation

$$\begin{aligned} \partial _t\xi (t,z) =&- \frac{ \langle \partial _t\sigma (t, z) ,\partial _z\sigma (t, z) \rangle }{\vert \partial _z\sigma (t, z) \vert ^2} \partial _z\xi (t,z) +\frac{\langle \partial _z^4 \sigma (t, z),\partial _z\sigma (t, z) \rangle }{\vert \partial _z\sigma (t, z) \vert ^6} \partial _z\xi (t,z) \\ {}&- \frac{6 \langle \partial _z^3 \sigma (t, z), \partial _z\sigma (t, z) \rangle }{\vert \partial _z\sigma (t, z) \vert ^6 } \partial _z^2 \xi (t,z) +\frac{3 \langle \partial _z^2 \sigma (t, z), \partial _z\sigma (t, z) \rangle }{\vert \partial _z\sigma (t, z) \vert ^6 } \frac{(\partial _z^2 \xi (t,z))^2}{\partial _z\xi (t,z)} \\ {}&+\frac{ \langle \partial _z^2 \sigma (t, z), \partial _z\sigma (t, z) \rangle }{\vert \partial _z\sigma (t, z) \vert ^6 } \left( -4 \partial _z^3 \xi (t,z) + \frac{12 (\partial _z^2\xi (t,z))^2}{\partial _z\xi (t,z)}\right) \\ {}&+ \frac{1}{\vert \partial _z\sigma (t, z) \vert ^2 } \left( - \partial _z^4 \xi (t,z) +\frac{10 \partial _z^2 \xi (t,z) \partial _z^3 \xi (t,z)}{\partial _z\xi (t,z)} - \frac{15(\partial _z^2 \xi (t,z))^3}{(\partial _z\xi (t,z))^2}\right) + l.o.t. \, . \end{aligned}$$

Hence, we obtain the following system for \(\xi \)

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\xi (t,z) =-\frac{ \partial _z^4 \xi (t,z)}{\vert \partial _z\sigma (t, z )\vert ^2 } +g \\ \xi (t,y)= y \\ \partial _z^2 \xi (t,y)= \frac{\langle \partial _z^2 \sigma (t,y), \partial _z\sigma (t,y) \rangle \partial _z\xi (t,y)}{\vert \partial _z\sigma (t,y) \vert ^2} \\ \xi (0,z)= \psi _0^{-1}(z) \end{array}\right. } \end{aligned}$$
(3.14)

where g is a non-linear smooth function which depends on \(\partial _z^j \xi \) for \(j \in \{1,2,3\}\), on \(\partial _z^j \sigma \) for \(j \in \{1, 2,3,4\}\) and on \(\partial _t\sigma \). We now observe that system (3.14) has a structure very similar to that of (3.6); hence, after linearizing, we apply the linear theory developed by Solonnikov in [46] and obtain well-posedness. Contraction estimates then allow us to conclude the existence and uniqueness of a solution via a fixed-point argument. Reversing the above argument, we obtain that the function \(\psi \) solves system (3.13).

Then, \(\sigma _t\) is a solution to system (3.2). Indeed, the motion equation follows from (3.13) and the geometric evolution of \(\gamma _t\) in normal direction. The geometric boundary conditions, namely attachment, curvature, and third-order conditions, are satisfied as \(\gamma _t\) is a solution to the geometric problem. Moreover, the boundary conditions in system (3.13) ensure that \(\sigma _t\) satisfies the second order condition.

Thus, by uniqueness of the analytic problem proved in Theorem 3.11, \(\sigma _t\) (that is \(\gamma _t\) up to reparametrization) and \(\widetilde{\gamma }_t\) need to coincide on a possibly small time interval. \(\square \)

4 Curvature bounds

To simplify the notation, we introduce the following polynomials.

Definition 4.1

For \(h \in {\mathbb {N}}\), we denote by \({\mathfrak {p}}_\sigma ^h(k)\) a polynomial in \(k,\dots ,\partial _s^h k\) with constant coefficients in \({\mathbb {R}}\) such that every monomial it contains is of the form

$$\begin{aligned} C \prod _{l=0}^h (\partial _s^lk)^{\alpha _l}\quad \text { with} \quad \sum _{l=0}^h(l+1)\alpha _l = \sigma , \end{aligned}$$

where \(\alpha _l\in {\mathbb {N}}\) for \(l\in \{0,\dots ,h\}\) and \(\alpha _{l_0}\ge 1\) for at least one index \(l_0\).
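For instance, the monomial \(k^2\partial _s^2 k\) appearing in the motion equation is of the form above with

$$\begin{aligned} \alpha _0=2\,,\quad \alpha _2=1\,, \qquad \sum _{l=0}^{2}(l+1)\alpha _l=1\cdot 2+3\cdot 1=5\,, \end{aligned}$$

hence it is a monomial of \({\mathfrak {p}}_5^2(k)\).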

Remark 4.2

One can easily prove that

$$\begin{aligned} \partial _s\left( {\mathfrak {p}}_\sigma ^h( k)\right)&={\mathfrak {p}}_{\sigma +1}^{h+1}( k)\,,\nonumber \\ \mathfrak {p}_{\sigma _1}^{h_1}(k)\mathfrak {p}_{\sigma _2}^{h_2} (k)&=\mathfrak {p}_{\sigma _1+\sigma _2}^{\max \{h_1,h_2\}}(k)\,, \\ \mathfrak {p}_\sigma ^{h_1}(k)+ \mathfrak {p}_\sigma ^{h_2}(k)&= \mathfrak {p}_\sigma ^{\max \{h_1,h_2\}}(k). \end{aligned}$$

Moreover, following the arguments in [32], it holds

$$\begin{aligned} \partial _t\left( {\mathfrak {p}}_{\sigma }^h (k) \right) = {\mathfrak {p}}_{\sigma +4}^{h+4} (k)+ \Lambda {\mathfrak {p}}_{\sigma +1}^{h+1} (k)+ \mu {\mathfrak {p}}_{\sigma +2}^{h+2} (k) \,. \end{aligned}$$
(4.1)

Lemma 4.3

If \(\gamma \) satisfies (2.10), then for any \(j \in {\mathbb {N}}\) the j-th derivative of the scalar curvature of \(\gamma \) satisfies

$$\begin{aligned} \partial _t\partial _s^j k =&-2\partial _{s}^{j+4}k -5k^2\partial _s^{j+2}k +\mu \,\partial _s^{j+2}k +\Lambda \partial _{s}^{j+1}k +\mathfrak {p}_{j+5}^{j+1}\left( k\right) + \mu \,\mathfrak {p}_{j+3}^{j}(k) \, . \end{aligned}$$
(4.2)

Proof

For \(j=0\) we have

$$\begin{aligned}\partial _tk =&-2\partial _{s}^{4}k-5k^{2}\partial _{s}^{2}k-6k\left( \partial _{s}k\right) ^{2} +\Lambda \partial _{s}k-k^{5}+\mu \left( \partial _{s}^{2}k+k^3\right) \\=&-2\partial _{s}^{4}k-5k^{2}\partial _{s}^{2}k + \mu \partial _s^2 k + \Lambda \partial _sk + {\mathfrak {p}}_5^1(k) + \mu {\mathfrak {p}}_3^0(k)\,. \end{aligned}$$

Then, assuming that relation (4.2) is true for j, we show that

$$\begin{aligned} \partial _t\partial _s^{j+1} k =&\partial _t\partial _s\partial _s^{j} k = \partial _s\partial _t\partial _s^{j} k + (kV-\partial _s\Lambda ) \partial _s^{j+1} k \nonumber \\=&-2 \partial _s^{j+5} k -10 k \partial _sk \partial _s^{j+2}k -5 k^2 \partial _s^{j+3} k + \mu \partial _s^{j+3} k + \partial _s\Lambda \partial _s^{j+1} k + \Lambda \partial _s^{j+2}k \nonumber \\ {}&+ {\mathfrak {p}}_{j+6}^{j+2} + \mu {\mathfrak {p}}_{j+4}^{j+1} -2k \partial _s^2 k \partial _s^{j+1}k - k^4 \partial _s^{j+1}k + \mu k^2 \partial _s^{j+1}k - \partial _s\Lambda \partial _s^{j+1}k \nonumber \\=&-2\partial _{s}^{j+5}k -5k^2\partial _s^{j+3}k +\mu \,\partial _s^{j+3}k +\Lambda \partial _{s}^{j+2}k +{\mathfrak {p}}_{j+6}^{j+2}\left( k\right) + \mu \,{\mathfrak {p}}_{j+4}^{j+1}(k)\, . \end{aligned}$$

By induction, formula (4.2) holds for any \(j \in {\mathbb {N}}\). \(\square \)

4.1 Bound on \(\Vert {\partial _s^2 k }\Vert _{L^2}\)

We aim to show that, once the following condition is satisfied, the tangential velocity behaves like the normal velocity at the boundary points.

Definition 4.4

Let \(\gamma _t\) be a maximal solution to the elastic flow in \([0,T_{\max })\). We say that \(\gamma _t\) satisfies the uniform non-degeneracy condition if there exists \(\rho >0\) such that

$$\begin{aligned} \tau _2(y) \ge \rho \end{aligned}$$
(4.3)

for every \(t \in [0,T_{\max })\) and \(y \in \{0,1\}\).

Lemma 4.5

Let \(\gamma _t\) be a maximal solution to the elastic flow of curves subject to the boundary conditions (2.7), such that the uniform non-degeneracy condition (4.3) holds in \([0,T_{\max })\). Then, for every \(t \in [0,T_{\max })\) and \(y \in \{0,1\}\), the tangential velocity is proportional to the normal velocity, that is

$$\begin{aligned} \Lambda (y)\approx \partial _s^2 k (y) \,. \end{aligned}$$

Proof

Since the boundary points are constrained to the x-axis, we have that

$$\begin{aligned} (\partial _t\gamma _t)_2 (y)=-2 \partial _s^2 k (y) \nu _2 (y)+\Lambda (y) \tau _2(y)=0 \end{aligned}$$

for \(y \in \{0,1\}\) and \(t \in [0,T_{\max })\). Since \(\tau _2\) is bounded away from zero at the boundary points and \(\vert \nu _2 \vert \le 1\), the ratio \(\nu _2/\tau _2\) is bounded there, and it follows that

$$\begin{aligned} \Lambda (y) = 2 \partial _s^2 k (y) \dfrac{\nu _2(y)}{\tau _2(y)} \approx \partial _s^2 k (y) \end{aligned}$$

for \(t \in [0,T_{\max })\) and \(y \in \{0,1\}\). \(\square \)

Proposition 4.6

Let \(\gamma _t\) be a maximal solution to the elastic flow of curves subject to the boundary conditions (2.7) with initial datum \(\gamma _0\), which satisfies the uniform non-degeneracy condition (4.3) in the maximal time interval \([0,T_{\max })\). Then, for all \(t\in [0,T_{\max })\), it holds

$$\begin{aligned} \frac{d}{dt}\int _{\gamma } \vert \partial _s^2 k\vert ^2{\,\textrm{d} }s \le C ( \mathcal {E}(\gamma _0)) \,. \end{aligned}$$

Proof

From formula (4.2) we have

$$\begin{aligned} \dfrac{d}{dt} \int _\gamma \vert \partial _s^2 k\vert ^2{\,\textrm{d} }s =&\int _\gamma 2 \partial _s^2 k \partial _t\partial _s^2 k + (\partial _s^2 k )^2 (\partial _s\Lambda - kV){\,\textrm{d} }s \nonumber \\=&\int _\gamma - 4 \partial _s^2 k \partial _s^6 k - 10 k^2 \partial _s^2 k \partial _s^4 k +2 \mu \partial _s^2 k \partial _s^4 k + 2 \Lambda \partial _s^3 k \partial _s^2 k \nonumber \\ {}&+ {\mathfrak {p}}_{10}^3 (k) + \mu {\mathfrak {p}}_8 ^2 (k) + (\partial _s^2 k )^2 (\partial _s\Lambda - kV) \, {\,\textrm{d} }s \nonumber \\=&\int _\gamma - 4 \partial _s^2 k \partial _s^6 k - 10 k^2 \partial _s^2 k \partial _s^4 k +2 \mu \partial _s^2 k \partial _s^4 k \nonumber \\&+ 2 \Lambda \partial _s^3 k \partial _s^2 k +2 \partial _s\Lambda (\partial _s^2 k)^2 + {\mathfrak {p}}_{10}^3 (k) + \mu {\mathfrak {p}}_8 ^2 (k) {\,\textrm{d} }s\, . \end{aligned}$$

The terms involving the tangential velocity combine into a total derivative:

$$\begin{aligned} \int _\gamma \partial _s\Lambda (\partial _s^2 k )^2 + 2 \Lambda \partial _s^2 k \partial _s^3 k {\,\textrm{d} }s = \int _\gamma \partial _s( \Lambda (\partial _s^2 k )^2) {\,\textrm{d} }s= \Lambda (\partial _s^2 k )^2 \Big \vert _0 ^1 \,. \end{aligned}$$
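This reduction to a boundary term can be sanity-checked symbolically with concrete stand-ins for \(\Lambda \) and k (a minimal sketch; the chosen functions are ours and play no special role):

```python
import sympy as sp

s = sp.symbols('s')
# concrete smooth stand-ins for Lambda and the curvature k on [0, 1]
Lam = sp.sin(s) + 2
k = sp.cos(3 * s)

k2 = sp.diff(k, s, 2)  # second arclength derivative of k
integrand = sp.diff(Lam, s) * k2**2 + 2 * Lam * k2 * sp.diff(k2, s)
total = sp.integrate(integrand, (s, 0, 1))
boundary = (Lam * k2**2).subs(s, 1) - (Lam * k2**2).subs(s, 0)

# the integrand is the exact derivative of Lambda * (d_s^2 k)^2,
# so the integral reduces to the boundary evaluation
assert abs(float((total - boundary).evalf())) < 1e-9
```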

Moreover, integrating the remaining terms by parts, we get

$$\begin{aligned} \dfrac{d}{dt} \int _\gamma \vert \partial _s^2 k\vert ^2{\,\textrm{d} }s=&\int _\gamma -4(\partial _s^4 k)^2 - 2 \mu (\partial _s^3k)^2 + {\mathfrak {p}}_{10}^3 (k) + \mu {\mathfrak {p}}_8 ^2 (k) {\,\textrm{d} }s \nonumber \\ {}&+ \Lambda (\partial _s^2 k )^2 \Big \vert _0 ^1 +4 ( \partial _s^3 k \partial _s^4 k- \partial _s^2 k \partial _s^5 k )\Big \vert _0 ^1 -10 k^2 \partial _s^2 k \partial _s^3 k \Big \vert _0 ^1 \nonumber \\ {}&- 12 (\partial _sk )^3 \partial _s^2 k \Big \vert _0 ^1+ 2 \mu \partial _s^2 k \partial _s^3 k \Big \vert _0 ^1 \, . \end{aligned}$$
(4.4)

Using Navier boundary conditions, the boundary terms in equation (4.4) reduce to

$$\begin{aligned} \Lambda (\partial _s^2 k )^2 \Big \vert _0 ^1 +4 ( \partial _s^3 k \partial _s^4 k- \partial _s^2 k \partial _s^5 k )\Big \vert _0 ^1 - 12 (\partial _sk )^3 \partial _s^2 k \Big \vert _0 ^1+ 2 \mu \partial _s^2 k \partial _s^3 k \Big \vert _0 ^1 \,. \end{aligned}$$
(4.5)

We aim to lower the order of the second and third terms in (4.5). In particular, differentiating the condition \(k(y)=0\) in time and using relation (2.13), we have

$$\begin{aligned} 4 \partial _s^3 k \partial _s^4 k= 2 \Lambda \partial _sk \partial _s^3 k+2 \mu \partial _s^2 k \partial _s^3 k \,. \end{aligned}$$
(4.6)
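For the reader's convenience, let us sketch how (4.6) is obtained; here we use that \(k(y)=0\) by the Navier conditions, so that every term of (2.13) carrying a factor of k vanishes at the boundary points and (assuming (2.13) takes the usual form for the elastic flow) the evolution of the curvature reduces there to

$$\begin{aligned} 0=\partial _tk(y) = -2 \partial _s^4 k + \mu \,\partial _s^2 k + \Lambda \,\partial _sk \quad \Longrightarrow \quad 2\partial _s^4 k = \mu \,\partial _s^2 k + \Lambda \,\partial _sk\,, \end{aligned}$$

and multiplying both sides by \(2\partial _s^3 k\) yields (4.6).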

From the conditions in (2.7), it follows that

$$\begin{aligned} \partial _t\langle \gamma , 2 \partial _sk \nu - \mu \tau \rangle = \langle V \nu + \Lambda \tau , \partial _t(2 \partial _sk \nu - \mu \tau )\rangle =0\,, \end{aligned}$$

then, computing the scalar product using (4.2), we obtain

$$\begin{aligned} 0=&-2 \partial _t\partial _sk V + 2 \Lambda \partial _sk \partial _sV + \mu V \partial _sV \nonumber \\=&4 \partial _t\partial _sk \partial _s^2 k + 2 \Lambda \partial _sk (-2 \partial _s^3 k + \mu \partial _sk) -2\mu \partial _s^2 k (-2 \partial _s^3 k + \mu \partial _sk) \nonumber \\=&4 \partial _s\partial _tk \partial _s^2 k- 4 \partial _s\Lambda \partial _sk \partial _s^2 k + 2 \Lambda \partial _sk (-2 \partial _s^3 k + \mu \partial _sk) \nonumber \\ {}&-2\mu \partial _s^2 k (-2 \partial _s^3 k + \mu \partial _sk) \nonumber \\=&-8 \partial _s^2 k \partial _s^5k - 24 (\partial _sk)^3 \partial _s^2 k + 4\partial _s\Lambda \partial _sk \partial _s^2 k+ 4\Lambda (\partial _s^2 k)^2 \nonumber \\ {}&+4\mu \partial _s^2 k \partial _s^3 k- 4 \partial _s\Lambda \partial _sk \partial _s^2 k \nonumber \\ {}&+ 2 \Lambda \partial _sk (-2 \partial _s^3 k + \mu \partial _sk) -2\mu \partial _s^2 k (-2 \partial _s^3 k + \mu \partial _sk) \nonumber \\=&-8 \partial _s^2 k \partial _s^5k - 24 (\partial _sk)^3 \partial _s^2 k + 4\Lambda (\partial _s^2 k)^2 -4 \Lambda \partial _sk \partial _s^3 k \nonumber \\ {}&+8\mu \partial _s^2 k \partial _s^3 k + 2 \mu \Lambda (\partial _sk)^2 -2 \mu ^2 \partial _sk \partial _s^2 k\, , \end{aligned}$$

that is,

$$\begin{aligned}{} & {} -4 \partial _s^2 k \partial _s^5k = 12 (\partial _sk)^3 \partial _s^2 k - 2\Lambda (\partial _s^2 k)^2\nonumber \\{} & {} +2\Lambda \partial _sk \partial _s^3 k -4\mu \partial _s^2 k \partial _s^3 k - \mu \Lambda (\partial _sk)^2 +\mu ^2 \partial _sk \partial _s^2 k\,. \end{aligned}$$
(4.7)
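The rearrangement leading to (4.7) can be checked symbolically, treating the boundary quantities as independent symbols (a minimal sketch; the variable names are ours):

```python
import sympy as sp

# boundary quantities treated as independent symbols:
# k1 = ds k, k2 = ds^2 k, k3 = ds^3 k, k5 = ds^5 k
k1, k2, k3, k5, Lam, mu = sp.symbols('k1 k2 k3 k5 Lambda mu')

# last line of the chain above: 0 = expr
expr = (-8 * k2 * k5 - 24 * k1**3 * k2 + 4 * Lam * k2**2 - 4 * Lam * k1 * k3
        + 8 * mu * k2 * k3 + 2 * mu * Lam * k1**2 - 2 * mu**2 * k1 * k2)

# claimed rearrangement (4.7)
lhs = -4 * k2 * k5
rhs = (12 * k1**3 * k2 - 2 * Lam * k2**2 + 2 * Lam * k1 * k3
       - 4 * mu * k2 * k3 - mu * Lam * k1**2 + mu**2 * k1 * k2)

# (4.7) is exactly "0 = expr" divided by two and rearranged
assert sp.expand(lhs - rhs - expr / 2) == 0
```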

Hence, substituting (4.6) and (4.7) into (4.5), we obtain

$$\begin{aligned} \dfrac{d}{dt} \int _\gamma \vert \partial _s^2 k\vert ^2{\,\textrm{d} }s=&\int _\gamma -4(\partial _s^4 k)^2 - 2 \mu (\partial _s^3k)^2 + {\mathfrak {p}}_{10}^3 (k) + \mu {\mathfrak {p}}_8 ^2 (k) {\,\textrm{d} }s \nonumber \\ {}&-\Lambda (\partial _s^2 k )^2 \Big \vert _0 ^1 + 4 \Lambda \partial _sk \partial _s^3 k \Big \vert _0 ^1 - \mu \Lambda (\partial _sk)^2 \Big \vert _0 ^1 + \mu ^2 \partial _sk \partial _s^2 k\Big \vert _0 ^1\, . \end{aligned}$$

We now recall that \(\Lambda \) is proportional to \(\partial _s^2k\) at boundary points (see Lemma 4.5), hence it follows that \(\Lambda {\mathfrak {p}}_\sigma ^h(k)={\mathfrak {p}}_{\sigma +3}^{\max \{2, h\}}(k)\). Thus, we have

$$\begin{aligned} \dfrac{d}{dt} \int _\gamma \vert \partial _s^2 k\vert ^2{\,\textrm{d} }s =&-4\Vert \partial _s^4 k\Vert _{L^2(\gamma )}^2 - 2 \mu \Vert \partial _s^3k\Vert ^2_{L^2(\gamma )} + \int _\gamma {\mathfrak {p}}_{10}^3 (k) + \mu {\mathfrak {p}}_8 ^2 (k) {\,\textrm{d} }s \nonumber \\ {}&+ {\mathfrak {p}}_9^3(k) \Big \vert _0 ^1 + \mu {\mathfrak {p}}_7^3(k) \Big \vert _0 ^1 + \mu ^2 {\mathfrak {p}}_5^2(k)\Big \vert _0 ^1\, . \end{aligned}$$

By means of Lemma 4.6 and Lemma 4.7 in [32], for any \(\varepsilon >0\) we have

$$\begin{aligned} \int _{\gamma }^{} |\mathfrak {p}_{10}^{3}\left( k\right) |{\,\textrm{d} }s \le&\varepsilon \Vert \partial _s^{4}k\Vert _{L^2}^2+C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{\Theta _1}_{L^2}\right) \,,\nonumber \\ \int _{\gamma }^{}|\mathfrak {p}_{8}^2\left( k\right) |{\,\textrm{d} }s \le&\varepsilon \Vert \partial _s^{3}k\Vert _{L^2}^2 +C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{\Theta _2}_{L^2}\right) \,,\nonumber \\ \vert {\mathfrak {p}}_{9}^{3}(k)(y)\vert \le&\varepsilon \Vert \partial _s^{4}k\Vert _{L^2}^2+C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{\Theta _3}_{L^2}\right) \,,\nonumber \\ \vert {\mathfrak {p}}_{7}^{3}(k)(y)\vert \le&\varepsilon \Vert \partial _s^{4}k\Vert _{L^2}^2 +C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{\Theta _4}_{L^2}\right) \,, \nonumber \\ \vert {\mathfrak {p}}_{5}^{2}(k)(y)\vert \le&\varepsilon \Vert \partial _s^{3}k\Vert _{L^2}^2 +C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{\Theta _5}_{L^2}\right) \, , \end{aligned}$$

for some exponents \(\Theta _i >2\) with \(i = 1, \dots ,5.\)

Hence, we get

$$\begin{aligned} \dfrac{d}{dt} \int _\gamma \vert \partial _s^2 k\vert ^2{\,\textrm{d} }s \le -C\left( \Vert \partial _s^4 k\Vert _{L^2(\gamma )}^2 + \mu \Vert \partial _s^3k\Vert ^2_{L^2(\gamma )}\right) +C \left( \Vert k \Vert ^2_{L^2(\gamma )}+ \Vert k \Vert ^\Theta _{L^2(\gamma )} \right) \end{aligned}$$

for some exponent \(\Theta >2\) and a constant C, both depending on \(\ell (\gamma )\). Using the energy monotonicity proved in Proposition 2.8, we conclude that

$$\begin{aligned} \dfrac{d}{dt} \int _\gamma \vert \partial _s^2 k\vert ^2{\,\textrm{d} }s \le C(\mathcal {E}(\gamma _0))\,. \end{aligned}$$

\(\square \)

4.2 Bound on \(\Vert {\partial _s^6 k}\Vert _{L^2}\)

We observe that, since (2.10) is a fourth-order parabolic equation, after having controlled the second-order derivative of the curvature it is natural to control the sixth-order derivative. Then, using interpolation inequalities, we get estimates for all the intermediate orders. Before doing that, we notice that the elastic flow of curves becomes instantaneously smooth. More precisely, following the proof presented in [32] in the case of closed curves (both using the so-called Angenent's parameter trick [5, 6, 11] and the classical theory of linear parabolic equations [46]), one can show that any solution to the elastic flow in a time interval [0, T] is smooth for positive times, in the sense that it admits a \(C^\infty \)-parametrization in the interval \([\varepsilon ,T]\) for every \(\varepsilon \in (0,T)\).

From now on, we denote by

$$\begin{aligned} v:=\partial _t\gamma = V \nu + \Lambda \tau \end{aligned}$$
(4.8)

the velocity of \(\gamma \). Hence, by means of integration by parts and the commutation rule in Lemma 2.7, we get the following identity

$$\begin{aligned} \dfrac{d}{dt} \dfrac{1}{2} \int _\gamma \vert \partial _t^\perp v \vert ^2 {\,\textrm{d} }s =&- 2\int _\gamma \vert (\partial _s^\perp )^2 (\partial _t^\perp v) \vert ^2 {\,\textrm{d} }s + \dfrac{1}{2} \nonumber \\&\int _\gamma \vert \partial _t^\perp v \vert ^2 (\partial _s\Lambda -kV) {\,\textrm{d} }s+ \int _\gamma \langle Y , \partial _t^\perp v \rangle {\,\textrm{d} }s \nonumber \\&- 2\langle \partial _t^\perp v ,(\partial _s^\perp )^3 (\partial _t^\perp v)\rangle \Big \vert _0^1 + 2\langle \partial _s^\perp (\partial _t^\perp v) , (\partial _s^\perp )^2 (\partial _t^\perp v) \rangle \Big \vert _0^1 \, , \end{aligned}$$
(4.9)

where we set

$$\begin{aligned} Y:= \partial _t^\perp (\partial _t^\perp v) + 2 (\partial _s^\perp )^4 (\partial _t^ \perp v)\,. \end{aligned}$$

Before proceeding, we prove the following lemma, which provides estimates for certain special families of polynomials.

Lemma 4.7

Let \(\gamma :[0,1]\rightarrow {{{\mathbb {R}}}}^2\) be a smooth regular curve. For all \(j \le 7 \), if the polynomial \({\mathfrak {p}}_{\sigma (j)}^j (k)\) defined as in Definition 4.1 satisfies one of the following conditions:

  1. (i)

    \(\sigma (j) \ge 2(l+1)\) for all \(l \le j\),

  2. (ii)

    \(\sigma (j) \ge 2(l+1)\) for all \(l \le j-1\) and \((j+1) \le \sigma (j) < 2(j+1)\),

and

$$\begin{aligned} \sigma (j) - \sum _{l=0}^j \alpha _l < 15\,, \end{aligned}$$
(4.10)

then, for any \(\varepsilon >0\), there exist a constant C and an exponent \(\Theta >2\) such that

$$\begin{aligned} \int _{\gamma }^{} |\mathfrak {p}_{\sigma (j)}^{j}\left( k\right) |{\,\textrm{d} }s \le \varepsilon \Vert \partial _s^{8}k\Vert _{L^2}^2+C(j,\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{\Theta }_{L^2}\right) . \end{aligned}$$

Similarly, for all \(j \le 7 \) and

$$\begin{aligned} \sigma '(j) - \sum _{l=0}^j \alpha _l < 16\,, \end{aligned}$$

for any \(\varepsilon >0\) there exist a constant C and an exponent \(\Theta '>2\) such that for \(y \in \{0,1\}\) it holds

$$\begin{aligned} \vert {\mathfrak {p}}_{\sigma '(j)}^{j}(k)(y)\vert \le \varepsilon \Vert \partial _s^{8}k\Vert _{L^2}^2+C(j,\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{\Theta '}_{L^2}\right) \,. \end{aligned}$$

Proof

By definition, every monomial of \({\mathfrak {p}}_{\sigma (j)}^{j}(k)\) is of the form \(C\prod _{l=0}^{j} (\partial _s^lk)^{\alpha _l}\) with

$$\begin{aligned} \alpha _l\in {\mathbb {N}} \qquad \text {and} \qquad \sum _{l=0}^{j}\alpha _l(l+1)=\sigma (j)\,. \end{aligned}$$

We set

$$\begin{aligned} \beta _l:=\frac{\sigma (j)}{(l+1)\alpha _l} \end{aligned}$$

for every \(l \le j\), and we take \(\beta _l=0\) if \(\alpha _l=0\). Setting \(J:=\{0\le l\le j \,:\, \alpha _l\ne 0\}\), we observe that \(\sum _{l\in J}^{}\frac{1}{\beta _l}=\sum _{l\in J}\frac{(l+1)\alpha _l}{\sigma (j)}=1\), hence by the Hölder inequality we get

$$\begin{aligned} C\int _{\gamma }\prod _{l=0}^{j} \vert \partial _s^lk\vert ^{\alpha _l}\,\textrm{d}s \le C\prod _{l=0}^{j}\left( \int _{\gamma }\vert \partial _s^lk\vert ^{\alpha _l\beta _l}\,\textrm{d}s\right) ^{\frac{1}{\beta _l}} =C\prod _{l=0}^{j}\Vert \partial _s^lk\Vert ^{\alpha _l}_{L^{\alpha _l\beta _l}}. \end{aligned}$$

If condition (i) holds, then \(\alpha _l\beta _l \ge 2\) for every \(l\in J\). Applying the Gagliardo-Nirenberg inequality (see [3] or [7], for instance) for every \(l \le j\) yields

$$\begin{aligned} \Vert \partial _s^lk\Vert _{L^{\alpha _l\beta _l}}\le C(l,j,\alpha _l,\beta _l, \ell (\gamma ))\Vert \partial _s^{8}k\Vert _{L^2}^{\eta _l}\Vert k\Vert _{L^2}^{1-\eta _l}+\Vert k \Vert _{L^2}\, \end{aligned}$$

where the coefficient \(\eta _l\) is given by

$$\begin{aligned} \eta _l=\frac{l+1/2-1/(\alpha _l\beta _l)}{8} \in \left[ \frac{l}{8},1\right) . \end{aligned}$$
(4.11)
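As a sanity check on (4.11), one can verify numerically that \(\eta _l\in \left[ \frac{l}{8},1\right) \) whenever \(\alpha _l\beta _l\ge 2\) and \(l \le 7\) (a small script in our own notation):

```python
from fractions import Fraction

def eta(l, p):
    """Gagliardo-Nirenberg exponent (4.11) for ||d_s^l k||_{L^p},
    interpolating between ||d_s^8 k||_{L^2} and ||k||_{L^2}."""
    return (Fraction(l) + Fraction(1, 2) - Fraction(1, p)) / 8

# eta is increasing in p, equals l/8 at p = 2 and tends to (l + 1/2)/8 < 1
for l in range(8):                       # orders l = 0, ..., 7
    for p in (2, 3, 4, 10, 100, 10**6):  # integrability alpha_l * beta_l >= 2
        assert Fraction(l, 8) <= eta(l, p) < 1
```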

Then, we have

$$\begin{aligned} C\int _{\gamma }^{}\prod _{l=0}^{j}\vert \partial _s^lk\vert ^{\alpha _l}\,\textrm{d}s&\le C\prod _{l=0 }^{j}\Vert \partial _s^lk\Vert ^{\alpha _l}_{L^{\alpha _l\beta _l}}\nonumber \\&\le C\prod _{l=0}^{j}\Vert k\Vert ^{(1-\eta _l)\alpha _l}_{L^2}\left( \Vert \partial _s^{8}k\Vert _{L^2}+\Vert k\Vert _{L^2}\right) ^{\eta _l\alpha _l}\nonumber \\ =&C\Vert k\Vert ^{\sum _{l=0}^{j}(1-\eta _l)\alpha _l}_{L^2}\left( \Vert \partial _s^{8}k\Vert _{L^2}+\Vert k\Vert _{L^2}\right) ^{\sum _{l=0}^{j}\eta _l\alpha _l}. \end{aligned}$$

Moreover, from condition (4.10), we have

$$\begin{aligned} \sum _{l=0}^j \eta _l\alpha _l \le \frac{\sigma (j)-1- \sum _{l=0}^j \alpha _l}{8} < 2, \end{aligned}$$

that is, by means of Young’s inequality with \(p=\frac{2}{\sum _{l=0}^j\eta _l\alpha _l}\) and \(q=\frac{2}{2-\sum _{l=0}^{j}\eta _l\alpha _l}\) we obtain

$$\begin{aligned} C\int _{\gamma }\prod _{l=0}^{j}\vert \partial _s^lk\vert ^{\alpha _l}\,\textrm{d}s \le \varepsilon C \left( \Vert \partial _s^{8}k\Vert _{L^2}+\Vert k\Vert _{L^2}\right) ^{2}+\frac{C}{\varepsilon }\Vert k\Vert ^{\Theta }_{L^2} \end{aligned}$$
(4.12)

where the constant C depends on j, \(\varepsilon \) and \(\ell (\gamma )\), and \(\Theta >2\).
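Here p and q are indeed conjugate exponents: setting \(S:=\sum _{l=0}^j \eta _l\alpha _l \in (0,2)\), we have

$$\begin{aligned} \frac{1}{p}+\frac{1}{q}= \frac{S}{2}+\frac{2-S}{2}=1\,, \end{aligned}$$

so the weighted Young inequality \(ab \le \varepsilon \,a^{p} + C(\varepsilon )\,b^{q}\) applies with \(a=\left( \Vert \partial _s^{8}k\Vert _{L^2}+\Vert k\Vert _{L^2}\right) ^{S}\) and b the product of the remaining powers of \(\Vert k\Vert _{L^2}\).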

Otherwise, if condition (ii) holds, we have \(1\le \alpha _j\beta _j<2\), that is

$$\begin{aligned} \Vert \partial _s^j k \Vert _{L^{\alpha _j \beta _j}}^{\alpha _j} \le \Vert \partial _s^j k \Vert _{L^2}^{\alpha _j} \le \Vert \partial _s^{8}k\Vert ^{\eta _j\alpha _j}_{L^2} \Vert k\Vert ^{(1-\eta _j)\alpha _j}_{L^2}+\Vert k\Vert ^{\alpha _j}_{L^2} \end{aligned}$$

where \(\eta _j= \frac{j}{8}\) and we used the boundedness of \(\ell (\gamma )\).

Hence, as in the previous case, we have

$$\begin{aligned} C\int _{\gamma }\prod _{l=0}^{j}\vert \partial _s^lk\vert ^{\alpha _l}\,\textrm{d}s \le&C \prod _{l=0}^{j-1} \Vert \partial _s^l k \Vert _{L^{\alpha _l \beta _l}}^{\alpha _l} \Vert \partial _s^j k \Vert _{L^{\alpha _j \beta _j}}^{\alpha _j} \\ \le&C \left( \Vert \partial _s^{8}k\Vert ^{\sum _{l=0}^{j-1}\eta _l\alpha _l}_{L^2} \Vert k\Vert ^{\sum _{l=0}^{j-1}(1-\eta _l)\alpha _l}_{L^2}+\Vert k\Vert _{L^2}^{\sum _{l=0}^{j-1}{\alpha _l}} \right) \\ {}&\left( \Vert \partial _s^{8}k\Vert ^{\eta _j\alpha _j}_{L^2} \Vert k\Vert ^{(1-\eta _j)\alpha _j}_{L^2}+\Vert k\Vert ^{\alpha _j}_{L^2} \right) \\ \le&C \Vert k\Vert _{L^2}^{\sum _{l=0}^j(1-\eta _l)\alpha _l } \left( \Vert \partial _s^{8}k\Vert ^{\sum _{l=0}^{j}\eta _l\alpha _l}_{L^2}+\Vert \partial _s^{8}k\Vert ^{\sum _{l=0}^{j-1}\eta _l\alpha _l}_{L^2} \Vert k\Vert ^{\eta _j\alpha _j}_{L^2} \right. \\ {}&\left. + \Vert \partial _s^{8}k\Vert ^{\eta _j\alpha _j}_{L^2} \Vert k\Vert _{L^2}^{\sum _{l=0}^{j-1}{\eta _l \alpha _l}}+ \Vert k\Vert _{L^2}^{\sum _{l=0}^{j}{\eta _l \alpha _l}} \right) \end{aligned}$$

where for all \(l\le j-1\) the coefficient \(\eta _l\) is given by expression (4.11) and \(\eta _j= \frac{j}{8}\). Applying again Young’s inequality, since \(\sum _{l=0}^{j}\eta _l\alpha _l<2\) still holds, we obtain estimate (4.12).

The second part of the lemma follows by the same arguments. \(\square \)

Lemma 4.8

Let \(\gamma _t\) be a maximal solution to the elastic flow of curves subject to the Navier boundary conditions (2.7), such that the uniform non-degeneracy condition (4.3) holds in the maximal time interval \([0,T_{\max })\). Then, for every \(j \in {\mathbb {N}}\), it holds

$$\begin{aligned} \partial _s^j \Lambda (y){\mathfrak {p}}_\sigma ^h(k)(y) = {\mathfrak {p}}_{\sigma + j+3}^{\max \{h, j+2\}}(k)(y) \end{aligned}$$

and

$$\begin{aligned} \partial _t\Lambda (y){\mathfrak {p}}_\sigma ^h(k)(y) = {\mathfrak {p}}^{\max \{h,6\}}_{\sigma +7}(k)(y)+ \mu {\mathfrak {p}}^{\max \{h,4\}}_{\sigma +5}(k)(y) \end{aligned}$$

for every \(t \in [0,T_{\max })\) and \(y \in \{0,1\}\).

Proof

By means of Lemma 4.5 and by the fact that \(\tau _2\) is bounded from below, we have

$$\begin{aligned} \Lambda (y) = 2 \partial _s^2 k (y) \dfrac{\nu _2(y)}{\tau _2(y)} = {\mathfrak {p}}^2_3(k)(y) \end{aligned}$$

for \(y \in \{0,1\}\). Hence, by Remark 4.2, it follows that

$$\begin{aligned} \partial _s^j \Lambda (y) = {\mathfrak {p}}^{j+2}_{j+3}(k)(y)\,, \end{aligned}$$

and thus,

$$\begin{aligned} \partial _s^j \Lambda (y) {\mathfrak {p}}_\sigma ^h(k)(y)= {\mathfrak {p}}^{j+2}_{j+3}(k)(y){\mathfrak {p}}_\sigma ^h(k)(y)= {\mathfrak {p}}^{\max \{h,j+2\}}_{\sigma +j+3}(k)(y)\,. \end{aligned}$$

Similarly, by formula (4.1), we have

$$\begin{aligned} \partial _t\Lambda (y)= \partial _t\Big ({\mathfrak {p}}_3^2(k)(y)\Big )= {\mathfrak {p}}_7^6 (k)(y)+ \mu {\mathfrak {p}}_5^4 (k)(y) \end{aligned}$$

then,

$$\begin{aligned} \partial _t\Lambda (y) {\mathfrak {p}}_\sigma ^h(k)(y)= {\mathfrak {p}}^{\max \{h,6\}}_{\sigma +7}(k)(y)+ \mu {\mathfrak {p}}^{\max \{h,4\}}_{\sigma +5}(k)(y)\,. \end{aligned}$$

\(\square \)

From now on, for any \(t \in [0,T_{\max })\), we choose the tangential velocity \(\Lambda (t,x)\), \(x \in (0,1)\), as the linear interpolation between its values at the boundary points, that is

$$\begin{aligned} \Lambda (t,x)= \Lambda (t,0) \Big ( 1 + \frac{\Lambda (t,1)-\Lambda (t,0)}{\Lambda (t,0)} \frac{1}{\ell (\gamma )} \int _0^x \vert \partial _x \gamma \vert \, dx \Big ). \end{aligned}$$
(4.13)
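For instance, on a polygonal discretization of \(\gamma \), the interpolation (4.13) (in its equivalent form \(\Lambda (t,x)=\Lambda (t,0)+\big (\Lambda (t,1)-\Lambda (t,0)\big )\,s(x)/\ell (\gamma )\)) can be sketched as follows; the function and variable names are ours:

```python
import numpy as np

def tangential_velocity(gamma, lam0, lam1):
    """Arclength-linear interpolation of the tangential velocity
    between its boundary values lam0 = Lambda(t,0), lam1 = Lambda(t,1),
    for a polygonal curve gamma of shape (N, 2)."""
    seg = np.linalg.norm(np.diff(gamma, axis=0), axis=1)  # edge lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])           # arclength at the nodes
    return lam0 + (lam1 - lam0) * s / s[-1]               # s[-1] = total length

# straight unit segment on the x-axis, five uniform nodes
gamma = np.stack([np.linspace(0.0, 1.0, 5), np.zeros(5)], axis=1)
lam = tangential_velocity(gamma, 2.0, 4.0)
```

Consistently with the proof of Lemma 4.9 below, the discrete slope of `lam` with respect to arclength is the constant \((\Lambda (t,1)-\Lambda (t,0))/\ell (\gamma )\) on every edge.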

Lemma 4.9

Let \(\Lambda \) be the tangential velocity defined in (4.13). Then there exist two constants \(C_1=C_1(\ell (\gamma ))\) and \(C_2=C_2(\mathcal {E}(\gamma _0),\ell (\gamma ))\) such that

$$\begin{aligned} \vert \partial _s\Lambda (t,x)\vert&\le C_1(\vert \Lambda (t,1) \vert + \vert \Lambda (t,0) \vert ) \, , \nonumber \\ \vert \partial _t\Lambda (t,x)\vert&\le C_2 \Big [ \vert \partial _t\Lambda (t,0)\vert +\vert \partial _t\Lambda (t,1)\vert + \vert \partial _t\Lambda (t,0)\vert \frac{\vert \Lambda (t,1)\vert }{\vert \Lambda (t,0)\vert } \nonumber \\&+ \vert \Lambda (t,1)-\Lambda (t,0)\vert ^2+ \vert \Lambda (t,1)-\Lambda (t,0)\vert \Big ] \end{aligned}$$

for \(t \in [0,T_{\max })\) and \(x \in [0,1]\).

Proof

From (4.13) it easily follows that

$$\begin{aligned} \partial _s\Lambda (t,x) = \frac{\Lambda (t,1)-\Lambda (t,0)}{\ell (\gamma )} \qquad \text {and} \qquad \partial _s^j \Lambda (t,x) =0 \quad \text {for } j \ge 2\,. \end{aligned}$$

Moreover, taking the time derivative, we get

$$\begin{aligned} \partial _t\Lambda (t,x)=\,&\partial _t\Lambda (t,0) \Big ( 1 + \frac{\Lambda (t,1)-\Lambda (t,0)}{\Lambda (t,0)} \frac{1}{\ell (\gamma )} \int _0^x \vert \partial _x \gamma \vert \, dx \Big ) \\ {}&+ \frac{(\partial _t\Lambda (t,1)-\partial _t\Lambda (t,0)) \Lambda (t,0)-(\Lambda (t,1)-\Lambda (t,0)) \partial _t\Lambda (t,0) }{\Lambda (t,0)}\\&\quad \frac{1}{\ell (\gamma )} \int _0^x \vert \partial _x \gamma \vert \, dx \\ {}&- \big ( \Lambda (t,1)-\Lambda (t,0) \big )\frac{1}{\ell ^2(\gamma )} \frac{d (\ell (\gamma ))}{dt}\int _0^x \vert \partial _x \gamma \vert \, dx \\ {}&+ \big ( \Lambda (t,1)-\Lambda (t,0)\big )\frac{1}{\ell (\gamma )} \frac{d }{dt}\int _0^x \vert \partial _x \gamma \vert \, dx \\=\,&\partial _t\Lambda (t,0) \Big ( 1 + \frac{\Lambda (t,1)-\Lambda (t,0)}{\Lambda (t,0)} \frac{1}{\ell (\gamma )} \int _0^x \vert \partial _x \gamma \vert \, dx \Big ) \\ {}&+ \frac{(\partial _t\Lambda (t,1)-\partial _t\Lambda (t,0)) \Lambda (t,0)-(\Lambda (t,1)-\Lambda (t,0)) \partial _t\Lambda (t,0) }{\Lambda (t,0)}\\&\quad \frac{1}{\ell (\gamma )} \int _0^x \vert \partial _x \gamma \vert \, dx \\ {}&- \frac{\big ( \Lambda (t,1)-\Lambda (t,0) \big )^2}{\ell ^2(\gamma )}\int _0^x \vert \partial _x \gamma \vert \, dx + \frac{ \Lambda (t,1)-\Lambda (t,0) }{\ell ^2(\gamma )}\\&\quad \int _\gamma k V {\,\textrm{d} }s \int _0^x \vert \partial _x \gamma \vert \, dx \\&+ \big ( \Lambda (t,1)-\Lambda (t,0)\big )\frac{1}{\ell (\gamma )} \frac{d }{dt}\int _0^x \vert \partial _x \gamma \vert \, dx \, . \end{aligned}$$

where we used relations (2.11). Hence, noticing that by interpolation and Proposition 4.6 it follows that

$$\begin{aligned} \int _\gamma kV {\,\textrm{d} }s \le C(\mathcal {E}(\gamma _0))\,, \end{aligned}$$

we obtain the last estimate in the statement. \(\square \)

Lemma 4.10

If \(\gamma \) satisfies (2.10), then

$$\begin{aligned} \partial _t^2 \gamma =&\Big ( 4 \partial _s^6 k +10 k^2 \partial _s^4 k + {\mathfrak {p}}_7^3(k) -4 \Lambda \partial _s^3 k -6 \Lambda k^2 \partial _sk + \Lambda ^2 k \nonumber \\ {}&- 4 \mu \partial _s^4 k + \mu {\mathfrak {p}}_5^2(k)+ 2 \mu \Lambda \partial _sk + \mu ^2 \partial _s^2 k + \mu ^2 {\mathfrak {p}}_3^0(k) \Big ) \nu \nonumber \\ {}&+ \Big (\partial _t\Lambda + {\mathfrak {p}}_7^3(k)-2 \Lambda k \partial _s^3 k -3 \Lambda k^3 \partial _sk + \mu {\mathfrak {p}}_5^3(k)+\mu \Lambda k \partial _sk+ \mu ^2 {\mathfrak {p}}_3^1(k)\Big ) \tau \, . \end{aligned}$$
(4.14)

Proof

We first compute

$$\begin{aligned}{} & {} \partial _tV = 4 \partial _s^6 k + {\mathfrak {p}}_7^3(k)-2 \Lambda \partial _s^3 k + \Lambda {\mathfrak {p}}_4^1(k) - 4 \mu \partial _s^4 k\\{} & {} + \mu {\mathfrak {p}}_5^2(k) + \mu \Lambda \partial _sk + \mu ^2 \partial _s^2 k + \mu ^2 {\mathfrak {p}}_3^0(k) \end{aligned}$$

and

$$\begin{aligned} \partial _t\partial _sV =&4 \partial _s^7 k + {\mathfrak {p}}_8^4(k) -2 \Lambda \partial _s^4 k + \Lambda {\mathfrak {p}}_5^2(k)+ \partial _s\Lambda {\mathfrak {p}}_4^1(k)-4 \mu \partial _s^5 k + \mu {\mathfrak {p}}_6^3(k) \\ {}&+ \mu \Lambda \partial _s^2 k +3 \mu \partial _s\Lambda \partial _sk + \mu ^2 \partial _s^3 k + \mu ^2 {\mathfrak {p}}_4^1(k)\, . \end{aligned}$$

Then, by means of Lemma 2.7, we have

$$\begin{aligned} \partial _t^2 \tau =&( \partial _t\partial _sV + \partial _t\Lambda k + \Lambda \partial _tk)\nu - (\partial _sV + \Lambda k)^2 \tau \nonumber \\=&\Big (4 \partial _s^7 k + {\mathfrak {p}}_8^4(k) -4 \Lambda \partial _s^4 k + \Lambda {\mathfrak {p}}_5^2(k)+ \partial _s\Lambda {\mathfrak {p}}_4^1(k)\nonumber \\&+\Lambda ^2\partial _sk+\partial _t\Lambda k-4 \mu \partial _s^5 k + \mu {\mathfrak {p}}_6^3(k) \nonumber \\&+ \mu \Lambda {\mathfrak {p}}_3^2 (k) +3 \mu \partial _s\Lambda \partial _sk + \mu ^2 \partial _s^3 k + \mu ^2 {\mathfrak {p}}_4^1(k)\Big ) \nu \nonumber \\ {}&+ \Big ( {\mathfrak {p}}_8^3(k) + \Lambda {\mathfrak {p}}_5^3(k)+ \Lambda ^2 k^2 + \mu {\mathfrak {p}}_6^3(k)+ \mu \Lambda {\mathfrak {p}}_3^2(k) + \mu ^2 {\mathfrak {p}}_4^1(k)\Big ) \tau \, . \end{aligned}$$
(4.15)

Similarly, differentiating in time the relation (2.12) we get

$$\begin{aligned} \partial _t^2 \nu = -(\partial _sV + \Lambda k)^2 \nu - (\partial _t\partial _sV + \partial _t\Lambda k + \Lambda \partial _tk) \tau \,, \end{aligned}$$

that is

$$\begin{aligned} \partial _t^2 \nu =&\Big ( {\mathfrak {p}}_8^3(k) + \Lambda {\mathfrak {p}}_5^3(k)+ \Lambda ^2 k^2 + \mu {\mathfrak {p}}_6^3(k)+\mu \Lambda {\mathfrak {p}}_3^2(k) + \mu ^2 {\mathfrak {p}}_4^1(k)\Big ) \nu \nonumber \\ {}&- \Big (4 \partial _s^7 k + {\mathfrak {p}}_8^4(k) -4 \Lambda \partial _s^4 k + \Lambda {\mathfrak {p}}_5^2(k)+ \partial _s\Lambda {\mathfrak {p}}_4^1(k)\nonumber \\&\quad +\Lambda ^2\partial _sk+\partial _t\Lambda k-4 \mu \partial _s^5 k + \mu {\mathfrak {p}}_6^3(k) \nonumber \\&+ \mu \Lambda {\mathfrak {p}}_3^2 (k) +3 \mu \partial _s\Lambda \partial _sk + \mu ^2 \partial _s^3 k + \mu ^2 {\mathfrak {p}}_4^1(k)\Big ) \tau \, . \end{aligned}$$
(4.16)

Using computations (4.15) and (4.16), we obtain

$$\begin{aligned} \partial _t^2 \gamma =&(\partial _tV + \Lambda (\partial _sV + \Lambda k ) ) \nu + (\partial _t\Lambda - V ( \partial _sV + \Lambda k)) \tau \\=&\Big (\partial _tV+ \Lambda (-2\partial _s^3 k -3 k^2 \partial _sk + \mu \partial _sk)+ \Lambda ^2 k\Big ) \nu \\ {}&+ \Big (\partial _t\Lambda - (-2 \partial _s^2 k - k^3 + \mu k)( -2 \partial _s^3 k -3k^2 \partial _sk + \mu \partial _sk + \Lambda k)\Big ) \tau \\ =&\Big ( 4 \partial _s^6 k +10 k^2 \partial _s^4 k + {\mathfrak {p}}_7^3(k) -4 \Lambda \partial _s^3 k -6 \Lambda k^2 \partial _sk + \Lambda ^2 k \\&- 4 \mu \partial _s^4 k + \mu {\mathfrak {p}}_5^2(k)+ 2 \mu \Lambda \partial _sk + \mu ^2 \partial _s^2 k + \mu ^2 {\mathfrak {p}}_3^0(k) \Big ) \nu \\ {}&+ \Big (\partial _t\Lambda + {\mathfrak {p}}_7^3(k)-2 \Lambda k \partial _s^3 k -3 \Lambda k^3 \partial _sk + \mu {\mathfrak {p}}_5^3(k)+\mu \Lambda k \partial _sk+ \mu ^2 {\mathfrak {p}}_3^1(k)\Big ) \tau \, . \end{aligned}$$

\(\square \)

In the following, we show that to estimate the \(L^2\)-norm of \(\partial _s^6 k\) it is enough to control the \(L^2\)-norm of \(\partial _t^\perp v\). Hence, we start by writing the boundary terms in (4.9) in terms of the curvature and its derivatives, lowering the order by means of the boundary conditions.

Lemma 4.11

Let \(\gamma _t\) be a family of curves moving with velocity v defined in (4.8). Then,

$$\begin{aligned} \langle \partial _s^\perp (\partial _t^\perp v), (\partial _s^\perp )^2 (\partial _t^\perp v) \rangle = {\mathfrak {p}}_{17}^7 (k)+ {\mathfrak {p}}_{15}^7 (k) +{\mathfrak {p}}_{13}^7 (k)+ {\mathfrak {p}}_{11}^5 (k) + \mu ^4 {\mathfrak {p}}_{9}^4 (k) \,. \end{aligned}$$

Proof

By straightforward computations, we have that

$$\begin{aligned} \partial _s^\perp (\partial _t^\perp v) =\partial _s(\partial _tv^\perp ) \nu \,, \quad (\partial _s^\perp )^2 (\partial _t^\perp v) = \partial _s^2 (\partial _tv^\perp ) \nu \end{aligned}$$

where \(\partial _tv^\perp \) is the normal component of \(\partial _t^2 \gamma \), which is computed in (4.14). Hence, we compute

$$\begin{aligned} \partial _s(\partial _tv^\perp )=&4 \partial _s^7 k + {\mathfrak {p}}_8^4(k)-4 \Lambda \partial _s^4 k - 4 \partial _s\Lambda \partial _s^3 k + \Lambda ^2 \partial _sk -4 \mu \partial _s^5 k + \mu {\mathfrak {p}}_7^4(k) \nonumber \\ {}&+ 2 \mu \Lambda \partial _s^2 k +2 \mu \partial _s\Lambda \partial _sk+ \mu ^2 \partial _s^3 k+ \mu ^2{\mathfrak {p}}_4^1(k) \end{aligned}$$
(4.17)

and

$$\begin{aligned} \partial _s^2 (\partial _t^\perp v) =&\, {\mathfrak {p}}_9^5(k) + 4\Lambda \partial _s\Lambda \partial _sk + \Lambda {\mathfrak {p}}_6^2(k) -4 \partial _s^2 \Lambda \partial _s^3 k - 8 \partial _s\Lambda \partial _s^4 k - \partial _t\Lambda \partial _sk \nonumber \\&+\mu {\mathfrak {p}}_7^4(k) + \mu \Lambda {\mathfrak {p}}_4^1(k) +2 \mu \partial _s^2 \Lambda \partial _sk + 4 \mu \partial _s\Lambda \partial _s^2 k + \mu ^2 {\mathfrak {p}}_5^2(k)\, , \end{aligned}$$
(4.18)

where in relation (4.18) we used

$$\begin{aligned} 4 \partial _s^8 k =&\, {\mathfrak {p}}_9^5(k) + 4 \Lambda \partial _s^5 k + \Lambda {\mathfrak {p}}_6^2(k) - \Lambda ^2 \partial _s^2 k - \partial _t\Lambda \partial _sk \\ {}&+ 4 \mu \partial _s^6 k +\mu {\mathfrak {p}}_7^4(k) - 2 \mu \Lambda \partial _s^3 k + \mu \Lambda {\mathfrak {p}}_4^1(k) - \mu ^2 \partial _s^4 k + \mu ^2 {\mathfrak {p}}_5^2(k) \end{aligned}$$

since Navier boundary conditions hold. So, using expressions (4.17) and (4.18), replacing \(\Lambda \) and its derivatives by means of Lemma 4.8 and recalling that \(\mu >0\) is constant, we get

$$\begin{aligned} \langle \partial _s^\perp (\partial _t^\perp v), (\partial _s^\perp )^2 (\partial _t^\perp v) \rangle = {\mathfrak {p}}_{17}^7 (k)+ {\mathfrak {p}}_{15}^7 (k) +{\mathfrak {p}}_{13}^7 (k)+ {\mathfrak {p}}_{11}^5 (k) + \mu ^4 {\mathfrak {p}}_{9}^4 (k) \,. \end{aligned}$$

\(\square \)

Lemma 4.12

Let \(\gamma _t\) be a family of curves moving with velocity v defined in (4.8). Then,

$$\begin{aligned} \langle \partial _t^\perp v ,(\partial _s^\perp )^3 (\partial _t^\perp v)\rangle =&\langle \partial _t^\perp v , 4\partial _s^9 k \nu \rangle + \langle \partial _t^\perp v , (\partial _s^\perp )^3 (\partial _t^\perp v)-4\partial _s^9 k \nu \ \rangle \nonumber \\=&{\mathfrak {p}}_{17}^6 (k)+ {\mathfrak {p}}_{15}^7(k) + {\mathfrak {p}}_{13}^7(k) +{\mathfrak {p}}_{11}^7(k) + {\mathfrak {p}}_{9}^5(k) + {\mathfrak {p}}_7^2(k)\, . \end{aligned}$$

Proof

We now handle the other boundary term in (4.9) analogously. By standard computations, we have that

$$\begin{aligned} (\partial _s^\perp )^3 (\partial _t^\perp v) = \partial _s^3 (\partial _tv^\perp ) \nu \end{aligned}$$

where

$$\begin{aligned} \partial _s^3 (\partial _tv^\perp )=&4 \partial _s^9 k+ {\mathfrak {p}}_{10}^6(k) -4 \Lambda \partial _s^6 k+ \partial _s\Lambda {\mathfrak {p}}_6^5(k)-12 \partial _s^2 \Lambda \partial _s^4 k - 4 \partial _s^3 \Lambda \partial _s^3 k \nonumber \\ {}&+ \Lambda ^2 \partial _s^3 k+ 6 \Lambda \partial _s\Lambda \partial _s^2 k+ 6\Lambda \partial _s^2 \Lambda \partial _sk - 4 \mu \partial _s^7 k + \mu {\mathfrak {p}}_8^5(k) \nonumber \\ {}&+2 \mu \Lambda \partial _s^4 k+ 6 \mu \partial _s\Lambda \partial _s^3 k+ 6 \mu \partial _s^2 \Lambda \partial _s^2 k+ 2 \mu \partial _s^3 \Lambda \partial _sk + \mu ^2 \partial _s^5 k+ \mu ^2 {\mathfrak {p}}_6^3(k)\, . \end{aligned}$$

As above, we aim to write the ninth-order derivative as the sum of lower-order derivatives.

Hence, from condition (2.7), at boundary points it holds

$$\begin{aligned} \langle \partial _tv, \partial _t^2 (-2 \partial _sk \nu + \mu \tau )\rangle =0 , \end{aligned}$$
(4.19)

where

$$\begin{aligned} \partial _tv =&\partial _tV \nu + V \partial _t\nu + \partial _t\Lambda \tau + \Lambda \partial _t\tau \nonumber \\=&\Big (\partial _tV + \Lambda \partial _sV \Big ) \nu + (\partial _t\Lambda -V \partial _sV) \tau \nonumber \\=&\Big ( 4 \partial _s^6 k + {\mathfrak {p}}_7^3(k)-4 \Lambda \partial _s^3 k + \Lambda {\mathfrak {p}}_4^1(k) - 4 \mu \partial _s^4 k\nonumber \\&+ \mu {\mathfrak {p}}_5^2(k) + 2\mu \Lambda \partial _sk + \mu ^2 \partial _s^2 k + \mu ^2 {\mathfrak {p}}_3^0(k)\Big )\nu \nonumber \\&+ \Big (\partial _t\Lambda -4 \partial _s^2k \partial _s^3 k \Big ) \tau \end{aligned}$$
(4.20)

and

$$\begin{aligned} \partial _t^2 (-2 \partial _sk \nu + \mu \tau )=&-2 \partial _t^2 \partial _sk \nu -4 \partial _t\partial _sk \partial _t\nu -2 \partial _sk \partial _t^2 \nu + \mu \partial _t^2 \tau \\=&\Big (-2 \partial _t^2 \partial _sk-2 \partial _sk (\partial _t^2 \nu ) ^\perp + \mu (\partial _t^2 \tau )^\perp \Big ) \nu \\ {}&+ \Big (4 \partial _sV\partial _t\partial _sk -2 \partial _sk (\partial _t^2 \nu )^\top + \mu (\partial _t^2 \tau )^\top \Big ) \tau \, . \end{aligned}$$

Then, after computing \(\partial _t^2 \partial _sk\), we have

$$\begin{aligned} \partial _t^2 (-2 \partial _sk \nu + \mu \tau )=&\Big (-8 \partial _s^9 k+ {\mathfrak {p}}_{10}^6(k) +\Lambda {\mathfrak {p}}_7^6(k)+ \partial _s\Lambda {\mathfrak {p}}_6^2(k)+ \Lambda ^2 {\mathfrak {p}}_4^3(k) +(\partial _s\Lambda )^2 {\mathfrak {p}}_2^1(k) \nonumber \\&+\partial _t\Lambda {\mathfrak {p}}^2_3(k) + \mu {\mathfrak {p}}_8^7(k) + \mu \Lambda {\mathfrak {p}}_5^4(k) + \mu \partial _s\Lambda {\mathfrak {p}}_4^1(k) +\mu \Lambda ^2{\mathfrak {p}}^1_2(k) \nonumber \\&+\mu \partial _t\Lambda {\mathfrak {p}}_1^0(k) + \mu ^2 {\mathfrak {p}}_6^5(k)+ \mu ^2 \Lambda {\mathfrak {p}}_3^2 (k) + \mu ^2 \partial _s\Lambda {\mathfrak {p}}_2^1(k) + \mu ^3 {\mathfrak {p}}_4^3(k) \Big ) \nu \nonumber \\ {}&+ \Big ({\mathfrak {p}}_{10}^7 (k) +\Lambda {\mathfrak {p}}_7^4(k)+ \partial _s\Lambda {\mathfrak {p}}_6^1(k)+ \Lambda ^2 {\mathfrak {p}}_4^1(k)+ \partial _t\Lambda {\mathfrak {p}}_3^1(k) \nonumber \\&+\mu {\mathfrak {p}}_8^3(k) + \mu \Lambda {\mathfrak {p}}_5^3(k)+ \mu \partial _s\Lambda {\mathfrak {p}}_4^1(k) + \mu \Lambda ^2 {\mathfrak {p}}_2^0(k) \nonumber \\&+\mu ^2 {\mathfrak {p}}_6^3(k) + \mu ^2 \Lambda {\mathfrak {p}}_3^2(k)+ \mu ^3 {\mathfrak {p}}_4^1(k) \Big ) \tau \, . \end{aligned}$$
(4.21)

Substituting equations (4.20) and (4.21) into the scalar product (4.19) and recalling that at boundary points \(\Lambda \) and its derivatives can be approximated by suitable polynomials in \(k\) (as shown in Lemma 4.8), we get

$$\begin{aligned} \langle \partial _tv,8 \partial _s^9 k \nu \rangle = {\mathfrak {p}}_{17}^6 (k)+ {\mathfrak {p}}_{15}^7(k) + {\mathfrak {p}}_{13}^7(k) + {\mathfrak {p}}_{11}^7(k)+ {\mathfrak {p}}_{9}^5(k)+ {\mathfrak {p}}_7^2(k)\,. \end{aligned}$$

We now notice that

$$\begin{aligned} \langle \partial _tv, \partial _s^9 k \nu \rangle = \langle \partial _tv^\perp \nu + \partial _tv^\top \tau , \partial _s^9 k \nu \rangle =\langle \partial _t^\perp v, \partial _s^9 k \nu \rangle \,, \end{aligned}$$

hence, we have

$$\begin{aligned} \langle \partial _t^\perp v ,(\partial _s^\perp )^3 (\partial _t^\perp v)\rangle =&\langle \partial _t^\perp v , 4\partial _s^9 k \nu \rangle + \langle \partial _t^\perp v , (\partial _s^\perp )^3 (\partial _t^\perp v)-4\partial _s^9 k \nu \rangle \\=&{\mathfrak {p}}_{17}^6 (k)+ {\mathfrak {p}}_{15}^7(k) + {\mathfrak {p}}_{13}^7(k) +{\mathfrak {p}}_{11}^7(k) + {\mathfrak {p}}_{9}^5(k) + {\mathfrak {p}}_7^2(k)\, . \end{aligned}$$

\(\square \)

Proposition 4.13

Let \(\gamma _t\) be a maximal solution to the elastic flow of curves subjected to boundary conditions (2.7), with initial datum \(\gamma _0\) in the maximal time interval \([0,T_{\max })\). Then for all \(t\in (0,T_{\max })\) it holds

$$\begin{aligned} \int _{\gamma } \vert \partial _t^\perp v \vert ^2{\,\textrm{d} }s \le C ( \mathcal {E}(\gamma _0)) \,. \end{aligned}$$

Proof

The claim follows once we estimate the quantities in (4.9). From equation (4.18), we have

$$\begin{aligned} - 2\int _\gamma \vert (\partial _s^\perp )^2 (\partial _t^\perp v) \vert ^2 {\,\textrm{d} }s&= - 2\int _\gamma \vert 4 \partial _s^8 k + {\mathfrak {p}}_{9}^5(k)+ \Lambda {\mathfrak {p}}_{6}^5 (k) + \partial _s\Lambda {\mathfrak {p}}_{5}^4 (k) \\ {}&\quad + \partial _s^2 \Lambda {\mathfrak {p}}_{4}^3 (k) + \Lambda ^2 {\mathfrak {p}}_{3}^2 (k)+ \Lambda \partial _s\Lambda {\mathfrak {p}}_{2}^1 (k) \\ {}&\quad + \mu {\mathfrak {p}}_{7}^6 (k) + \mu \Lambda {\mathfrak {p}}_{4}^3 (k)\\&\quad +\mu \partial _s\Lambda {\mathfrak {p}}_{3}^2 (k) + \mu ^2 {\mathfrak {p}}_{5}^4 (k) \vert ^2 {\,\textrm{d} }s\, . \end{aligned}$$

Hence, using the simple inequalities

$$\begin{aligned} \vert a+b \vert ^2&\le C \big ( \vert a \vert ^2 + \vert b \vert ^2 \big )\, , \\\vert a+b \vert ^2&\ge (1-\varepsilon ) \vert a \vert ^2 - C(\varepsilon ) \vert b \vert ^2 \end{aligned}$$
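The second of these is the Peter–Paul (weighted Young) lower bound; as a sanity check (a standard one-line computation, not spelled out in the text), it follows from the weighted upper bound applied to \(a=(a+b)-b\):

$$\begin{aligned} \vert a \vert ^2 \le \frac{1}{1-\varepsilon } \vert a+b \vert ^2 + \frac{1}{\varepsilon } \vert b \vert ^2 \qquad \Longrightarrow \qquad \vert a+b \vert ^2 \ge (1-\varepsilon ) \vert a \vert ^2 - \frac{1-\varepsilon }{\varepsilon } \vert b \vert ^2 \,, \end{aligned}$$

so that one may take \(C(\varepsilon )=\frac{1-\varepsilon }{\varepsilon }\), which equals \(1\) for \(\varepsilon =\frac{1}{2}\).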

with \(\varepsilon =\dfrac{1}{2}\), we get

$$\begin{aligned} - 2\int _\gamma \vert (\partial _s^\perp )^2 (\partial _t^\perp v) \vert ^2 {\,\textrm{d} }s \le&- \int _\gamma \vert 4 \partial _s^8 k \vert ^2 {\,\textrm{d} }s + C \int _\gamma \vert {\mathfrak {p}}_{18}^5(k)+\Lambda ^2 {\mathfrak {p}}_{12}^5 (k) + (\partial _s\Lambda )^2 {\mathfrak {p}}_{10}^4 (k) \nonumber \\&+ (\partial _s^2 \Lambda )^2 {\mathfrak {p}}_{8}^3 (k) + \Lambda ^4 {\mathfrak {p}}_6^2(k) + \mu ^2 {\mathfrak {p}}_{14}^6 (k) \nonumber \\ {}&+\mu ^2 \Lambda ^2 {\mathfrak {p}}_{8}^3 (k) + \mu ^2 (\partial _s\Lambda )^2 {\mathfrak {p}}_6^2 (k) + \mu ^4 {\mathfrak {p}}_{10}^4(k) \vert {\,\textrm{d} }s \nonumber \\ \le&- 16\int _\gamma \vert \partial _s^8 k \vert ^2 {\,\textrm{d} }s + \int _\gamma \vert {\mathfrak {p}}_{18}^5(k)+{\mathfrak {p}}_{16}^4 (k)\nonumber \\&+ {\mathfrak {p}}_{14}^6 (k)+ {\mathfrak {p}}_{12}^2(k) +{\mathfrak {p}}_{10}^4 (k) \vert {\,\textrm{d} }s \,, \end{aligned}$$
(4.22)

where we used the explicit expression of \(\Lambda \) in (4.13) and the estimates in Lemma 4.9.

Moreover, by the same arguments, from equation (4.20) we get

$$\begin{aligned} \dfrac{1}{2} \int _\gamma \vert \partial _t^\perp v \vert ^2 (\partial _s\Lambda - kV ) {\,\textrm{d} }s =&\dfrac{1}{2} \int _\gamma \vert \partial _t^\perp v \vert ^2 (\partial _s\Lambda +2 k \partial _s^2 k + k^4 -\mu k^2 ) {\,\textrm{d} }s \nonumber \\=&\int _\gamma \vert {\mathfrak {p}}_{18}^6 (k)+{\mathfrak {p}}_{17}^6 (k) + {\mathfrak {p}}_{16}^6 (k)+ {\mathfrak {p}}_{15}^6 (k)+{\mathfrak {p}}_{14}^6 (k)+{\mathfrak {p}}_{13}^6 (k) \nonumber \\&+{\mathfrak {p}}_{12}^6 (k)+ {\mathfrak {p}}_{12}^2 (k)+{\mathfrak {p}}_{10}^4 (k)+{\mathfrak {p}}_{9}^2 (k)+{\mathfrak {p}}_8^2(k)\vert {\,\textrm{d} }s\, . \end{aligned}$$
(4.23)

It remains to compute the integral in (4.9) involving \(Y\). By a straightforward computation, we have

$$\begin{aligned} \partial _t(\partial _tv)^\perp =&-8 \partial _s^{10} k -20 k^2 \partial _s^8 k + {\mathfrak {p}}_{11}^7(k) + \Lambda {\mathfrak {p}}_8^7(k) + \Lambda ^2 {\mathfrak {p}}_5^4(k) + \partial _t\Lambda {\mathfrak {p}}_4^3(k) \\ {}&+ 12 \mu \partial _s^8 k+\mu {\mathfrak {p}}_9^6(k)+\mu \Lambda {\mathfrak {p}}_6^5(k)+\mu \Lambda ^2 {\mathfrak {p}}_3^2(k) + \mu \partial _t\Lambda {\mathfrak {p}}_2^1(k) + \mu ^2{\mathfrak {p}}_7^6(k) \\ {}&+ \mu ^2 \Lambda {\mathfrak {p}}_4^3(k)+ \mu ^3 {\mathfrak {p}}_5^4(k) \, , \end{aligned}$$

and

$$\begin{aligned} \partial _s^4 (\partial _tv) ^ \perp =&4 \partial _s^{10} k + {\mathfrak {p}}_{11}^7(k) + \Lambda {\mathfrak {p}}_8^7(k)+ \partial _s\Lambda {\mathfrak {p}}_7^6(k) + \partial _s^2 \Lambda {\mathfrak {p}}_6^5(k) + \partial _s^3 \Lambda {\mathfrak {p}}_5^4(k)+ \partial _s^4 \Lambda {\mathfrak {p}}_4^3(k) \\&-4 \mu \partial _s^8 k+\mu {\mathfrak {p}}_9^6(k)+\mu \Lambda {\mathfrak {p}}_6^5(k)+\mu \partial _s\Lambda {\mathfrak {p}}_5^4(k)+\mu \partial _s^2 \Lambda {\mathfrak {p}}_4^3(k) \\&+\mu \partial _s^3 \Lambda {\mathfrak {p}}_3^2(k) + \mu \partial _s^4 \Lambda {\mathfrak {p}}_2^1(k) + \mu ^2{\mathfrak {p}}_7^6(k) \, . \end{aligned}$$

Then, we get

$$\begin{aligned} Y=&\partial _t^\perp (\partial _t^\perp v)+2 (\partial _s^\perp )^4 (\partial _t^ \perp v) \\=&\Big ( -20 k^2 \partial _s^8 k + {\mathfrak {p}}_{11}^7(k) + \Lambda {\mathfrak {p}}_8^7(k) + \partial _s\Lambda {\mathfrak {p}}_7^6(k) + \Lambda ^2 {\mathfrak {p}}_5^4(k) + \partial _t\Lambda {\mathfrak {p}}_4^3(k) \\&+ 4 \mu \partial _s^8 k+\mu {\mathfrak {p}}_9^6(k)+\mu \Lambda {\mathfrak {p}}_6^5(k) +\mu \partial _s\Lambda {\mathfrak {p}}_5^4(k)+\mu \Lambda ^2 {\mathfrak {p}}_3^2(k) + \mu \partial _t\Lambda {\mathfrak {p}}_2^1(k) \\ {}&+ \mu ^2{\mathfrak {p}}_7^6(k)+ \mu ^2 \Lambda {\mathfrak {p}}_4^3(k)+ \mu ^3 {\mathfrak {p}}_5^4(k)\Big ) \nu \, . \end{aligned}$$

Hence, computing the scalar product \(\langle Y, \partial _t^\perp v \rangle \), using the well-known Peter–Paul inequality and integrating by parts in the integral \(\int _\gamma \partial _s^6 k \partial _s^8 k {\,\textrm{d} }s\), we have

$$\begin{aligned} \int _\gamma \langle Y , \partial _t^\perp v \rangle {\,\textrm{d} }s \le&\frac{1}{2} \int _\gamma \vert \partial _s^8 k \vert ^2 {\,\textrm{d} }s -4 \mu \int _\gamma \vert \partial _s^7 k \vert ^2 {\,\textrm{d} }s + {\mathfrak {p}}_{15}^7(k) \Big \vert _0^1 \nonumber \\ {}&+\int _\gamma \vert {\mathfrak {p}}_{18}^7(k)+ {\mathfrak {p}}_{17}^6(k) + {\mathfrak {p}}_{16}^7(k)+{\mathfrak {p}}_{15}^6(k)+{\mathfrak {p}}_{14}^7(k) \nonumber \\&+{\mathfrak {p}}_{13}^6(k)+ {\mathfrak {p}}_{12}^6(k) + {\mathfrak {p}}_{11}^4(k)+ {\mathfrak {p}}_{10}^4(k)+{\mathfrak {p}}_{8}^4(k)+ {\mathfrak {p}}_{6}^2(k) \vert {\,\textrm{d} }s \, , \end{aligned}$$
(4.24)

where, as above, we estimated \(\Lambda \) and its derivatives by means of Lemma 4.9.

Moreover, using identities in Lemma 4.11 and Lemma 4.12, we end up with the following inequality

$$\begin{aligned} - 2\langle \partial _t^\perp v ,(\partial _s^\perp )^3 (\partial _t^\perp v)\rangle \Big \vert _0^1 + 2\langle \partial _s^\perp (\partial _t^\perp v) , (\partial _s^\perp )^2 (\partial _t^\perp v) \rangle \Big \vert _0^1 \le&\vert {\mathfrak {p}}_{17}^7(k)\vert + \vert {\mathfrak {p}}_{15}^7(k)\vert + \vert {\mathfrak {p}}_{13}^7(k)\vert \nonumber \\ {}&+ \vert {\mathfrak {p}}_{11}^7(k)\vert + \vert {\mathfrak {p}}_9^5(k)\vert + \vert {\mathfrak {p}}_7^2(k) \vert \, . \end{aligned}$$
(4.25)

Then, putting together inequalities (4.22), (4.23), (4.24) and (4.25), we get

$$\begin{aligned} \dfrac{d}{dt} \dfrac{1}{2} \int _\gamma \vert \partial _t^\perp v \vert ^2 {\,\textrm{d} }s&\le \int _\gamma \vert {\mathfrak {p}}_{18}^7(k)+ {\mathfrak {p}}_{17}^7(k)+{\mathfrak {p}}_{16}^7(k)+{\mathfrak {p}}_{14}^7(k)+ {\mathfrak {p}}_{15}^6(k) + {\mathfrak {p}}_{13}^6(k) + {\mathfrak {p}}_{12}^6(k) \\ {}&\quad + {\mathfrak {p}}_{11}^4(k)+ {\mathfrak {p}}_{10}^4(k) +{\mathfrak {p}}_8^4(k) + {\mathfrak {p}}_{9}^2(k)+ {\mathfrak {p}}_{6}^2(k) \vert {\,\textrm{d} }s \\ {}&\quad +\vert {\mathfrak {p}}_{17}^7(k)\vert + \mu \vert {\mathfrak {p}}_{15}^7(k)\vert + \mu ^2 \vert {\mathfrak {p}}_{13}^7(k)\vert \\&\quad + \mu ^3 \vert {\mathfrak {p}}_{11}^7(k)\vert + \mu ^4 \vert {\mathfrak {p}}_9^5(k)\vert + \mu ^5 \vert {\mathfrak {p}}_7^2(k) \vert \Big \vert _0^1\, . \end{aligned}$$

By means of Lemma 4.7, we have

$$\begin{aligned} \dfrac{d}{dt} \dfrac{1}{2} \int _\gamma \vert \partial _t^\perp v \vert ^2 {\,\textrm{d} }s \le&- C \Big (\Vert \partial _s^8 k \Vert ^2 _{L^2(\gamma )}+\mu \Vert \partial _s^7 k \Vert ^2 _{L^2(\gamma )} \Big ) + C\Big ( \Vert k \Vert _{L^2(\gamma )}^2 + \Vert k \Vert ^\Theta _{L^2(\gamma )} \Big ) \\ \le&C(\mathcal {E}(\gamma _0)) \end{aligned}$$

for some exponent \(\Theta >2\) and a constant \(C\) depending on \(\ell (\gamma )\).

Hence, by integrating, it follows

$$\begin{aligned} \int _\gamma \vert \partial _t^\perp v \vert ^2 {\,\textrm{d} }s \le C(\mathcal {E}(\gamma _0)) . \end{aligned}$$

\(\square \)

Proposition 4.14

Let \(\gamma _t\) be a maximal solution to the elastic flow of curves subjected to Navier boundary conditions with initial datum \(\gamma _0\), which satisfies the uniform non-degeneracy condition (4.4) in the maximal time interval \([0,T_{\max })\). Then, for all \(t\in (0,T_{\max })\) it holds

$$\begin{aligned} \int _{\gamma } \vert \partial _s^6 k\vert ^2{\,\textrm{d} }s \le C ( \mathcal {E}(\gamma _0)) \,. \end{aligned}$$

Proof

From formula (4.14) and Lemma 4.8, it follows

$$\begin{aligned} \partial _t^\perp v = \partial _tv ^\perp \nu = \Big (\partial _s^6k + {\mathfrak {p}}_7^4(k)+ \mu {\mathfrak {p}}_5^4 (k) + \mu ^2 {\mathfrak {p}}_3^2(k) \Big ) \nu \,. \end{aligned}$$

Since \(\mu \) is a constant, we can absorb it into the polynomials and simply write

$$\begin{aligned} \partial _s^6k =\partial _tv ^\perp + {\mathfrak {p}}_7^4(k)+ {\mathfrak {p}}_5^4 (k) + {\mathfrak {p}}_3^2(k) \end{aligned}$$

and, by means of the Peter–Paul inequality, we get

$$\begin{aligned}{} & {} \int _\gamma \vert \partial _s^6 k \vert ^2 {\,\textrm{d} }s \le \int _\gamma \vert \partial _tv ^\perp \vert ^2 {\,\textrm{d} }s\nonumber \\{} & {} + C \Bigg (\int _\gamma \vert {\mathfrak {p}}_7^4(k)\vert ^2 {\,\textrm{d} }s+\int _\gamma \vert {\mathfrak {p}}_5^4(k)\vert ^2 {\,\textrm{d} }s+\int _\gamma \vert {\mathfrak {p}}_3^2(k)\vert ^2 {\,\textrm{d} }s\Bigg ) \,. \end{aligned}$$
(4.26)

We now estimate separately the integrals involving the polynomials.

We start by considering

$$\begin{aligned} \int _\gamma \vert {\mathfrak {p}}_7^4(k)\vert ^2 {\,\textrm{d} }s= \int _\gamma \Big \vert \prod _{l=0}^4 (\partial _s^l k )^{\alpha _l}\Big \vert ^2 {\,\textrm{d} }s \end{aligned}$$

where \(\alpha _l \in {\mathbb {N}}\) and \(\sum _{l=0}^4 (l+1)\alpha _l = 7\). So, by the Hölder inequality, we get

$$\begin{aligned}{} & {} \int _\gamma \vert {\mathfrak {p}}_7^4(k)\vert ^2 {\,\textrm{d} }s= \int _\gamma \Big \vert \prod _{l=0}^4 (\partial _s^l k )^{\alpha _l}\Big \vert ^2 {\,\textrm{d} }s \le \prod _{l=0}^4 \Bigg (\int _\gamma \vert \partial _s^l k \vert ^{2\alpha _l \beta _l} {\,\textrm{d} }s \Bigg )^{\frac{1}{\beta _l}}=\prod _{l=0}^4 \Vert \partial _s^l k \Vert _{L^{2\alpha _l \beta _l}(\gamma )}^{2 \alpha _l} \end{aligned}$$

where \(\beta _l:= \frac{7}{(l+1)\alpha _l}>1\) if \(\alpha _l\ne 0\) (if \(\alpha _l=0\), the corresponding factor is identically equal to one), which clearly satisfies

$$\begin{aligned} \sum _{l=0}^4 \frac{1}{\beta _l} = 1\,. \end{aligned}$$
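For concreteness (an illustrative monomial, not singled out in the text), consider the term \(k^3 \partial _s^3 k\) of \({\mathfrak {p}}_7^4(k)\), i.e. \(\alpha _0=3\), \(\alpha _3=1\): then \(\sum _l (l+1)\alpha _l=3+4=7\), \(\beta _0=\frac{7}{3}\), \(\beta _3=\frac{7}{4}\), and indeed \(\frac{1}{\beta _0}+\frac{1}{\beta _3}=\frac{3}{7}+\frac{4}{7}=1\). The Hölder step then reads

$$\begin{aligned} \int _\gamma \vert k^3 \partial _s^3 k \vert ^2 {\,\textrm{d} }s \le \Bigg (\int _\gamma \vert k \vert ^{14} {\,\textrm{d} }s \Bigg )^{\frac{3}{7}} \Bigg (\int _\gamma \vert \partial _s^3 k \vert ^{\frac{7}{2}} {\,\textrm{d} }s \Bigg )^{\frac{4}{7}} = \Vert k \Vert _{L^{14}(\gamma )}^{6}\, \Vert \partial _s^3 k \Vert _{L^{7/2}(\gamma )}^{2} \,. \end{aligned}$$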

Then, we estimate each of these norms by the well-known interpolation inequalities (see [31], for instance),

$$\begin{aligned} \Vert \partial _s^l k \Vert _{L^{2\alpha _l \beta _l}(\gamma )} \le C \Vert \partial _s^6 k \Vert _{L^2(\gamma )}^{\sigma _l} \Vert k \Vert _{L^2(\gamma )}^{1-\sigma _l}+\Vert k \Vert _{L^2(\gamma )} \end{aligned}$$
(4.27)

for some constant \(C\) depending on \(\alpha _l, \beta _l\) and with exponent \(\sigma _l\) given by

$$\begin{aligned} \sigma _l= \frac{1}{6} \Big ( l - \frac{1}{2 \alpha _l \beta _l}+ \frac{1}{2}\Big ) \in \Big [\frac{l}{6}, 1\Big )\,. \end{aligned}$$

Moreover, we notice that

$$\begin{aligned} \sum _{l=0}^4 2 \alpha _l \sigma _l =&\sum _{l=0}^4\frac{1}{3} \Big ( \alpha _l (l+1) - \frac{1}{2 \beta _l}- \frac{\alpha _l }{2}\Big ) \nonumber \\=&\frac{7}{3}-\frac{1}{6}-\frac{\sum _{l=0}^4 \alpha _l}{6} <2 \, , \end{aligned}$$

where in the last inequality we use the fact that, since \(l\) and \(\alpha _l \) are, respectively, the orders of differentiation and the exponents of the derivatives in \({\mathfrak {p}}_7^4(k)\), it follows that

$$\begin{aligned} 1<\sum _{l=0}^4 \alpha _l \le 7 \,. \end{aligned}$$
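As a worked instance (again for the illustrative monomial \(k^3 \partial _s^3 k\), with \(\alpha _0=3\), \(\beta _0=\frac{7}{3}\), \(\alpha _3=1\), \(\beta _3=\frac{7}{4}\)), one finds

$$\begin{aligned} \sigma _0 = \frac{1}{6}\Big (0-\frac{1}{14}+\frac{1}{2}\Big )=\frac{1}{14}\,, \qquad \sigma _3 = \frac{1}{6}\Big (3-\frac{2}{7}+\frac{1}{2}\Big )=\frac{15}{28}\,, \end{aligned}$$

so that \(\sum _l 2 \alpha _l \sigma _l = \frac{6}{14}+\frac{30}{28}=\frac{3}{2}\), which agrees with \(\frac{7}{3}-\frac{1}{6}-\frac{4}{6}=\frac{3}{2}\) (here \(\sum _l \alpha _l = 4\)) and is indeed strictly less than \(2\).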

Then, multiplying together inequalities (4.27) and applying the Young inequality, we have

$$\begin{aligned} \int _\gamma \vert {\mathfrak {p}}_7^4(k)\vert ^2 {\,\textrm{d} }s \le&\left( \Vert \partial _s^6 k \Vert _{L^2(\gamma )}+\Vert k \Vert _{L^2(\gamma )}\right) ^{\sum _{l=0}^4 2 \alpha _l \sigma _l} \Vert k \Vert _{L^2(\gamma )}^{\sum _{l=0}^4 2 \alpha _l (1-\sigma _l)} \nonumber \\\le&\varepsilon \left( \Vert \partial _s^6 k \Vert _{L^2(\gamma )} +\Vert k \Vert _{L^2(\gamma )}\right) ^2+ C(\varepsilon ) \Vert k \Vert _{L^2(\gamma )}^{\Theta _1} \end{aligned}$$
(4.28)

for some exponent \(\Theta _1>2\).

Arguing in the same way, one can check that

$$\begin{aligned} \int _\gamma \vert {\mathfrak {p}}_5^4(k)\vert ^2 {\,\textrm{d} }s \le \varepsilon \left( \Vert \partial _s^6 k \Vert _{L^2(\gamma )} + \Vert k \Vert _{L^2(\gamma )}\right) ^2+ C(\varepsilon ) \Vert k \Vert _{L^2(\gamma )}^{\Theta _2} \end{aligned}$$
(4.29)

and

$$\begin{aligned} \int _\gamma \vert {\mathfrak {p}}_3^2(k)\vert ^2 {\,\textrm{d} }s \le \varepsilon \left( \Vert \partial _s^6 k \Vert _{L^2(\gamma )}+ \Vert k \Vert _{L^2(\gamma )} \right) ^2+ C(\varepsilon ) \Vert k \Vert _{L^2(\gamma )}^{\Theta _3} \end{aligned}$$
(4.30)

for some exponents \(\Theta _2,\Theta _3>2\).

Substituting the estimates (4.28), (4.29) and (4.30) into (4.26) and absorbing the small multiple of \(\Vert \partial _s^6 k \Vert ^2_{L^2(\gamma )} \) into the left-hand side, we have

$$\begin{aligned} \int _\gamma \vert \partial _s^6 k \vert ^2 {\,\textrm{d} }s \le \int _\gamma \vert \partial _tv ^\perp \vert ^2 {\,\textrm{d} }s + C (\Vert k \Vert ^2_{L^2(\gamma )} +\Vert k \Vert ^\Theta _{L^2(\gamma )}) \,, \end{aligned}$$
(4.31)

where, as above, \(\Theta >2\). Then, we conclude using Proposition 4.13 and the energy monotonicity in Proposition 2.8. \(\square \)

5 Long-time existence

In the following, we adapt the proof of [32, Theorem 4.15] to our situation.

Theorem 5.1

Let \(\gamma _0\) be a geometrically admissible initial curve. Suppose that \(\gamma _t\) is a maximal solution to the elastic flow with initial datum \(\gamma _0\) in the maximal time interval \([0,T_{\max })\) with \(T_{\max } \in (0, \infty ) \cup \{\infty \}\). Then, up to reparametrization and translation of \(\gamma _t\), it follows

$$\begin{aligned} T_{\max }= \infty \end{aligned}$$

or at least one of the following holds

  • \(\liminf _{t \rightarrow T_{\max }} \ell (\gamma _t )= 0\);

  • \(\liminf _{t \rightarrow T_{\max }} \tau _2 = 0\) at boundary points.

Proof

Suppose by contradiction that \(T_{\max }\) is finite and that neither of the two assertions in the statement is fulfilled. Then, in the whole time interval \([0,T_{\max })\), the length of the curves \(\gamma _t\) is uniformly bounded from below away from zero and the uniform condition (4.3) is satisfied. Moreover, since the energy (2.1) decreases in time, both the \(L^2\)-norm of the curvature and the length of \(\gamma \) are uniformly bounded from above. Let \(\varepsilon >0\) be fixed; by means of Proposition 4.6 and Proposition 4.13, we have that

$$\begin{aligned} \partial _s^2 k \in L^\infty ([0,T_{\max }); L^2) \qquad \text {and} \qquad \partial _s^6 k \in L^\infty ((\varepsilon ,T_{\max }); L^2)\,. \end{aligned}$$

Hence, using the Gagliardo–Nirenberg inequality, for all \(t\in (\varepsilon ,T_{\max })\) we get

$$\begin{aligned} \Vert \partial _s^{j}k\Vert _{L^2(\gamma )} \le C_1\Vert \partial _s^{6}k\Vert _{L^2(\gamma )}^\sigma \Vert k\Vert _{L^2(\gamma )}^{1-\sigma }+ C_2 \Vert k\Vert _{L^2(\gamma )}\le C(\mathcal {E}(\gamma _0)), \end{aligned}$$

for every integer \(j \le 6\), with constants independent of \(t\) and a suitable exponent \(\sigma \). Moreover, by interpolation, we have

$$\begin{aligned} \partial _s^j k \in L^\infty ((\varepsilon ,T_{\max }); L^\infty ) \end{aligned}$$

for every integer \(j \le 5\). Reparametrizing the curve \(\gamma _t\) into \(\widetilde{\gamma }_t\) with the property \(\vert \partial _x\widetilde{\gamma }(x)\vert =\ell (\widetilde{\gamma })\) for every \(x\in [0,1]\) and for all \(t\in [0,T_{\max })\), and translating it so that it remains in a ball \(B_R(0)\) for all times (which is possible since its length is uniformly bounded from above), we get

  • \( 0<c\le \sup _{t\in [0,T_{\max }), x\in [0,1]} \vert \partial _x\widetilde{\gamma }(t,x)\vert \le C<\infty \),

  • \(0<c\le \sup _{t\in [0,T_{\max }), x\in [0,1]} \vert \widetilde{\gamma }(t,x)\vert \le C<\infty \,.\)

Hence, \(\tau \in L^\infty ([0,T_{\max }); L^\infty )\) and \(\partial _x^j \widetilde{\gamma }\in L^{\infty }((\varepsilon ,T_{\max });L^\infty )\) for every integer \(j \le 7\). Then, from the observation above and the fact that \(\varvec{\kappa }= k \nu \), we get \(\partial _s^j \varvec{\kappa }\in L^{\infty }((\varepsilon ,T_{\max });L^\infty )\) for every integer \(j \le 5\) and \(\partial _s^6 \varvec{\kappa }\in L^{\infty }((\varepsilon ,T_{\max });L^2)\). Moreover, thanks to our choice of parametrization, we have

$$\begin{aligned} \varvec{\kappa }(x)= \frac{\partial _x^2 \widetilde{\gamma }(x) }{\ell (\widetilde{\gamma })^2} \qquad \text {and} \qquad \partial _s^j \varvec{\kappa }(x)=\frac{\partial _x^{j+2} \widetilde{\gamma }(x) }{\ell (\widetilde{\gamma })^{j+2}} \,. \end{aligned}$$
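These identities are a direct consequence of the constant-speed parametrization (a one-line check, using only \(\vert \partial _x \widetilde{\gamma }\vert \equiv \ell (\widetilde{\gamma })\)):

$$\begin{aligned} {\,\textrm{d} }s = \vert \partial _x \widetilde{\gamma }\vert {\,\textrm{d} }x = \ell (\widetilde{\gamma }) {\,\textrm{d} }x \qquad \Longrightarrow \qquad \partial _s = \frac{1}{\ell (\widetilde{\gamma })}\, \partial _x \,, \end{aligned}$$

so each arclength derivative contributes one factor of \(\ell (\widetilde{\gamma })^{-1}\), and \(\varvec{\kappa }=\partial _s^2 \widetilde{\gamma }\) accounts for the two extra powers.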

So, it follows that \(\partial _x^j \widetilde{\gamma }\in L^{\infty }((\varepsilon ,T_{\max });L^\infty )\) for every integer \(1 \le j \le 7\) and \(\partial _x^8 \widetilde{\gamma }\in L^{\infty }((\varepsilon ,T_{\max });L^2)\).

Then, by the Ascoli–Arzelà Theorem, there exists a curve \(\gamma _{\max }\) such that

$$\begin{aligned} \lim _{t\nearrow T_{\max }} \partial _x^j \widetilde{\gamma }(t,x)=\partial _x^j \gamma _{\max }(x) \end{aligned}$$

for every integer \(j \le 6\). The curve \(\gamma _{\max }\) is an admissible initial curve since, by continuity of \(k\) and \(\partial _s^2 k\), it fulfills the system (2.7) and the uniform condition (4.3) at boundary points. Then, there exists an elastic flow \({\overline{\gamma }}_t \in C^{\frac{4+\alpha }{4}, 4+ \alpha }\big ([T_{\max }, T_{\max } + \delta ) \times [0,1]; {{{\mathbb {R}}}}^2\big )\) for some \(\delta >0\). We again reparametrize \({\overline{\gamma }}_t\) to \({\hat{\gamma }}_t\) with constant speed equal to its length, and we have

$$\begin{aligned} \lim _{t\searrow T_{\max }} \partial _x^j \hat{\gamma }(t,x)=\partial _x^j \gamma _{\max }(x) \end{aligned}$$

for every integer \(j \le 6\).

Then,

$$\begin{aligned} \lim _{t\nearrow T_{\max }} \partial _t\widetilde{\gamma }(t,x) =\lim _{t\searrow T_{\max }}\partial _t\hat{\gamma }(t,x). \end{aligned}$$

Thus, we have found a solution to the elastic flow in \(C^{\frac{4+\alpha }{4}, 4+ \alpha }\big ([0, T_{\max } + \delta ) \times [0,1]; {{{\mathbb {R}}}}^2\big )\), which contradicts the maximality of \(T_{\max }\). \(\square \)

We conclude by emphasizing that, even though these arguments and techniques have already been used in the literature, all the previous works deal with closed curves (see for instance [19, 33, 44]) or open curves with fixed boundary points (see for instance [16, 29, 39, 40, 47]). Hence, the additional difficulties appearing in this paper are due to the fact that we impose only partial conditions on the boundary points.