Abstract
We study explicit solutions to the two dimensional Euler equations in the Lagrangian framework. All known solutions have been of the separation of variables type, where the time and space dependence are treated separately. The first such solutions were found already in the 19th century. We show that all previously known solutions belong to two families of solutions, and we introduce three new families. It seems likely that these are all the solutions of the separation of variables type.
1 Introduction
We will continue our analysis of explicit solutions of the incompressible Euler equations which was started in [18]. For a general overview of various aspects of the Euler equations from a mathematical point of view we refer to the survey [12]. There are two ways to think about the Euler (and Navier–Stokes) equations: one focuses either on the velocity field or on the fluid particles. The first approach is the Eulerian description and the second the Lagrangian description of the fluid flow. Below we will concentrate on the Lagrangian framework; for a physical treatment of this topic we refer to [8].
Since for nonlinear PDE it is typically very difficult to find any explicit solutions, it has been found convenient to relax the conditions in the Lagrangian description somewhat. So, instead of requiring that the determinant of the differential of the map from the Lagrangian to the Eulerian coordinates is one, we only demand that it is independent of time. Let us call this approach the quasi Lagrangian description of the flow. The goal is then to use this extra freedom to find more explicit solutions. Note that the quasi Lagrangian description still carries the full information about the flow; it is simply that the coordinates used to describe the flow have no intrinsic physical meaning: they are arbitrary, but convenient, coordinates.
The first explicit solutions of this type were found already in the 19th century by Gerstner and Kirchhoff [14, 16]. Kirchhoff’s solution is in fact so simple that one can also explicitly compute the Eulerian description of the flow, but Gerstner’s solution is genuinely a quasi Lagrangian solution. These solutions were then used to analyze more complicated situations with perturbation techniques. Gerstner’s solution also has the remarkable property that it can be used to model the interface between two different fluids, such as air and water.
Apparently no essentially new explicit solutions were found before the paper by Abrashkin and Yakubovich in 1984 [5]. Their solutions generalize both Kirchhoff’s and Gerstner’s solutions. Since then solutions of this type have been analyzed and generalized using harmonic maps; see for example [2, 6, 11] and the references therein. Group theory has also been used in the analysis of solutions [7]. Analytic functions have played a prominent role in these constructions, which is in some sense natural since already in the 19th century it was noticed that analytic functions could be used to analyze certain two dimensional flow problems.
All explicit quasi Lagrangian solutions that have been constructed turned out to be of the separation of variables type: the time dependence and the spatial dependence could be treated separately. However, while for the spatial part one could find solutions using complex functions, there was no natural role for complex functions in the time dependent part. Moreover, in [18] it was shown that even in the spatial domain the complex functions were not as essential as was previously thought.
Since complex functions were used in the description of the solutions, it was natural to also consider harmonic functions. In [18] we showed that if the map in the plane is both area preserving and harmonic, then it is necessarily affine. So a harmonic Lagrangian solution is like Kirchhoff’s solution. On the other hand, Gerstner’s solution is also harmonic, so that indeed by relaxing the conditions one obtains essentially new solutions with the quasi Lagrangian framework. However, harmonic functions are not really essential in the description of quasi Lagrangian solutions, as we will see below.
In the present article we do not use complex analysis at all. The reason is simple: complex analysis is not needed, and the analysis given below is quite naturally formulated in terms of real functions and real variables. The new families of quasi Lagrangian solutions given below arise naturally from a systematic analysis of the problem in the real domain. Indeed, the only reason we can think of why these families were not discovered previously is that their description using complex functions would be quite awkward. The harmonicity of functions also plays no role in these new solutions. Finally, our analysis is local, so the question of whether the given maps are analytic or merely differentiable is irrelevant in the present context.
Since we are not using complex functions, it is not so easy to compare our solutions to the previously known cases. For example, if the reader compares our formula (3.6) to the essentially equivalent formula (25) in [11, Theorem 3], the equivalence is not immediately obvious. In any case it seems that all the previously known solutions reduce either to the situation described in Section 3.1 (which could be called the Kirchhoff type case) or to the family of solutions given in Theorem 5.1 (the Gerstner type case). Solutions of these types can be found using harmonic maps, and even though there have previously been hints that even more complicated solutions exist [6, 18], we show in this paper how they can all be reduced to these cases. As far as we know, the families of solutions in Theorems 3.3, 5.7 and 5.9 are new, the first of which is a generalization of the Kirchhoff type. Thus we have four essentially different families of solutions, and apparently they give all the quasi Lagrangian solutions of the separation of variables type. We will not prove that there cannot be more solutions of this type, but we discuss below why we think that the existence of essentially different solutions is unlikely.
One could also ask how big the families of solutions are. One way to measure this is to count the number of arbitrary functions and constants in the general solution. Another physically interesting point of view is to ask whether one can find a solution with prescribed vorticity. For each of the four families of solutions we can derive a certain PDE such that if the vorticity solves this PDE, then there is an explicit solution with this vorticity. In one case the relevant PDE is obvious, while in the remaining three cases we have used the algorithm rifsimp [19], which is based on the ideas of the formal theory of PDE [20].
The Lagrangian framework has been and is still being used in many different contexts. In addition to the bulk flow, an interesting aspect is to model the flow in the presence of an air/water interface. In some other applied problems the equations are not precisely the Euler equations; for example, in large scale ocean current and meteorological problems it is important to take the Coriolis effect into account. In any case we hope that our new solutions will be useful also in these more general problems. For various aspects of the applications of the Lagrangian point of view we refer to [1, 3, 4, 9, 10, 15, 17] and the many references therein.
The paper is organized as follows. In Sect. 2 we collect some necessary background material. In Sect. 3 we formulate the problem precisely and analyze the first family of solutions, of which the Kirchhoff type case is a special case. Then in Sects. 4 and 5 we show that there are three more families of solutions, one of which is the Gerstner family and the other two are new. Finally, in Sect. 6 we discuss to what extent one can prescribe the vorticity of the solutions.
2 Preliminaries
2.1 Notation
Let \(v=(v^1,\dots ,v^m)\,:\,{\mathbb {R}}^n\rightarrow {\mathbb {R}}^m\) be some map and \(\alpha \in {\mathbb {N}}^n\) a multiindex. For spatial derivatives we use the jet notation \(v^i_{\alpha }=\partial ^{|\alpha |}v^i/\partial z_1^{\alpha _1}\cdots \partial z_n^{\alpha _n}\); for example, in two variables \(v^i_{10}=\partial v^i/\partial z_1\).
If v depends also on time we may use \(v_t\) or \(v'\) for the time derivative, whichever is more convenient in a given formula. For functions a that depend only on time we always use \(a'\) for the derivative.
In the analysis we will meet the Cauchy–Riemann equations in two different forms, so to avoid confusion let us introduce the following terminology. Let \(v\,:\,{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) be some map and consider the following pair of PDE systems:
$$\begin{aligned} \left\{ \begin{array}{l} v^1_{10}=v^2_{01}\\ v^1_{01}=-v^2_{10} \end{array}\right. \qquad \qquad \left\{ \begin{array}{l} v^1_{10}=-v^2_{01}\\ v^1_{01}=v^2_{10} \end{array}\right. \end{aligned}$$
The left system will be called the CR system and the right system the anti CR system. The solutions to the left system are CR maps and to the right system anti CR maps. Let us also introduce the rotations and reflections
$$\begin{aligned} M(\theta )=\begin{pmatrix} \cos \theta &{} -\sin \theta \\ \sin \theta &{} \cos \theta \end{pmatrix},\qquad {\hat{M}}(\theta )=\begin{pmatrix} \cos \theta &{} \sin \theta \\ \sin \theta &{} -\cos \theta \end{pmatrix}, \end{aligned}$$
where \(\theta \) is a function of time.
The minors of various matrices appear frequently in the computations, so it is convenient to recall some facts about them. Let \(A\in {\mathbb {R}}^{2\times k}\) and let us denote the columns of A by \(A_j\); then the minors of A will be denoted by \(p_{ij}=\det (A_i,A_j)\). Also, when \(v\,:\,{\mathbb {R}}^2\rightarrow {\mathbb {R}}^k\) is some map, the minors of its differential dv are denoted by
$$\begin{aligned} g_{ij}=v^i_{10}v^j_{01}-v^i_{01}v^j_{10}. \end{aligned}$$
In the analysis below we will repeatedly use the following simple facts.
Lemma 2.1
Suppose that \(A_i\ne 0\) and \(p_{ij}=p_{ik}=0\); then also \(p_{jk}=0\). In addition
$$\begin{aligned} p_{ij}p_{kl}-p_{ik}p_{jl}+p_{il}p_{jk}=0. \end{aligned}$$
If \(\varphi =Av\) then we have the Cauchy–Binet formula
$$\begin{aligned} \det (d\varphi )=\sum _{i<j} p_{ij}\,g_{ij}. \end{aligned}$$
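Since the Cauchy–Binet formula is used repeatedly below, a quick numerical sanity check may be helpful. The sketch below, with randomly generated matrices standing in for A and the differential dv, verifies that the determinant of a product of a \(2\times 3\) and a \(3\times 2\) matrix equals the sum of products of the corresponding \(2\times 2\) minors.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # stands in for the time part A
B = rng.standard_normal((3, 2))   # stands in for the differential dv

# Cauchy-Binet: det(AB) equals the sum over column pairs i < j of
# det(A_i, A_j) times the minor of B built from rows i and j
lhs = np.linalg.det(A @ B)
rhs = sum(np.linalg.det(A[:, [i, j]]) * np.linalg.det(B[[i, j], :])
          for i, j in combinations(range(3), 2))
assert np.isclose(lhs, rhs)
```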
2.2 Overdetermined PDE
In some computations below we have used the algorithm rifsimp [19] which is implemented in Maple. The acronym rif means reduced involutive form and the word involutive refers to the fact that general systems of PDE can be transformed to an involutive form. For a comprehensive overview of overdetermined or general PDE systems we refer to [20].
An analogous situation arises in polynomial algebra [13]. A polynomial system generates an ideal, which in turn defines the corresponding variety. Now computing the Gröbner basis of the ideal gives a lot of information about the variety. Similarly the involutive form can reveal important information about the structure of the solution set. Intuitively one may think about computing the involutive form of a system of PDE like computing the Gröbner basis of an ideal.
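To make the analogy concrete, here is a small computation, using sympy rather than Maple and purely for illustration: the lexicographic Gröbner basis of the ideal generated by the unit circle and a line immediately exhibits a univariate polynomial, much as an involutive form exhibits the structure of the solution set of a PDE system.

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# Ideal generated by the unit circle and the line x = y
G = list(groebner([x**2 + y**2 - 1, x - y], x, y, order='lex'))
# The basis contains a polynomial in y alone, so the y coordinates
# of the intersection points can be read off directly
print(G)
```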
2.3 Euler Equations
Let us consider the incompressible Euler equations
$$\begin{aligned} \left\{ \begin{array}{l} u_t+(u\cdot \nabla ) u+\nabla p=0\\ \nabla \cdot u=0 \end{array}\right. \end{aligned}$$
in some domain \(\Omega \subset {\mathbb {R}}^n\). This is called the Eulerian description of the flow and the coordinates of \(\Omega \), denoted x, are the Eulerian coordinates. Below we will consider another description which is almost the Lagrangian description of the flow.
Let \(D\subset {\mathbb {R}}^n\) be another domain and let us consider a family of diffeomorphisms \(\varphi ^t\,:\,D\rightarrow \Omega _t=\varphi ^t(D)\). The coordinates in D are denoted by z. We can also define
Now given such \(\varphi \) we can define the associated vector field u by the formula
Our goal is to find maps \(\varphi \) such that u solves the Euler equations in the two dimensional case. To state the relevant conditions, let us introduce the following matrices:
Straightforward computations show (see for example [18] for details) that we get the following conditions.
Theorem 2.2
Let \(h=\det ( P_1)+\det (P_2)\) and let us suppose that the following conditions are satisfied:
Then u given by (2.2) is a solution to (2.1).
In this case the Lagrangian description of the flow is given by the map \(\Phi ^t=\varphi ^t\circ (\varphi ^0)^{-1}\). Note that without loss of generality we can suppose that \( \det (d\varphi )>0\). It is also interesting to formulate the above condition in terms of vorticity. Recall that in the x coordinates the vorticity is \({\hat{\zeta }}=u^2_{10}-u^1_{01}\). Let us denote by \(\zeta \) the vorticity in the z coordinates, i.e. \(\zeta ={\hat{\zeta }}\circ \varphi ^t\). Recall that in two dimensions, if u is a solution to the Euler equations, then
$$\begin{aligned} {\hat{\zeta }}_t+u\cdot \nabla {\hat{\zeta }}=0. \end{aligned}$$
In the z coordinates this simply means that \(\zeta _t=0\). But then again straightforward computations show that in fact
Hence the condition of the previous Theorem could also be formulated using the vorticity instead of h.
In what follows we will try to find the most general solution of the given form. It is then important to remember that the domain D is simply a parameter domain which has no physical significance. Hence one can look for the “simplest” possible parameter domain. For future reference let us record this observation as
Lemma 2.3
Let \(\psi \,:\,{\hat{D}}\rightarrow D\) be an arbitrary diffeomorphism and let \({\tilde{\varphi }}^t=\varphi ^t\circ \psi \). Then \({\tilde{\varphi }}\) provides solutions to the Euler equations via formula (2.2) if and only if \(\varphi \) does.
Proof
This is because \(\det (d{\tilde{\varphi }})=\det (d\varphi )\det (d\psi )\) and \({\tilde{\zeta }}=\zeta \circ \psi \). \(\square \)
3 General Formulation of the Problem
Let us consider the maps of the following form
where \(A(t)\in {\mathbb {R}}^{2\times k}\), \(v\,:\, D\rightarrow {\mathbb {R}}^k\) and \(D\subset {\mathbb {R}}^2\) is some coordinate domain. Since all the analysis is local, the precise nature of D is not important in our context. We will try to find maps \(\varphi \) such that the corresponding vector field u defined by the formula (2.2) is a solution to the Euler equations. Hence we should find A and v such that the conditions in Theorem 2.2 are satisfied. Since we want that \(\det (d\varphi )\ne 0\), this necessarily implies that \(\mathsf {rank}(A)=\mathsf {rank}(dv)=2\).
The strategy we use to tackle this problem is described now. Since \(\det (d\varphi )\) is independent of time, Lemma 2.1 implies that
Now, if we fix any t, we obtain from this formula a homogeneous linear equation for the minors of dv:
We also recall from [18] that if \(\varphi \) is given by (3.1) then
This condition also gives equations of the form (3.3) when t is fixed. We conclude that for the most general solution we should look for v satisfying a system of constraints of the form (3.3).
By integrating (3.4) we obtain
The analysis of the time component of the solutions will be based on formulas (3.2) and (3.5).
If there are no spatial constraints then there are \(k(k-1)\) conditions for the 2k time components of A, since every \(p_{ij}\) and \(Q_{ij}\) in (3.2) and (3.5) has to be constant. Each spatial constraint of the form (3.3), however, decreases the number of time constraints by 2. On the other hand, we need to be able to choose at least two of the spatial variables arbitrarily because of Lemma 2.3 so we expect the number of spatial constraints to be at most \(k-2\). In this case there would be \(k^2-3k+4\) equations for the 2k functions. This means that for \(k\le 4\) we can expect to find solutions but for \(k>4\) we will obtain an overdetermined system. We will give a complete analysis of the cases \(k=2\), \(k=3\), and \(k=4\) in this paper. It appears that for \(k>4\) there really are no solutions but we could not find a sufficiently neat way to prove this.
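The dimension count above is elementary but worth tabulating; the purely illustrative lines below confirm that the expected number \(k^2-3k+4\) of time conditions exceeds the 2k entries of A exactly when \(k>4\).

```python
# Compare the expected number of time conditions, k^2 - 3k + 4 (assuming
# the maximal number k - 2 of spatial constraints), with the 2k entries
# of the matrix A(t).
for k in range(2, 8):
    conditions = k**2 - 3*k + 4
    unknowns = 2*k
    status = "overdetermined" if conditions > unknowns else "solvable in principle"
    print(f"k = {k}: {conditions} conditions, {unknowns} unknowns -> {status}")
```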
In the analysis we often have situations where a certain case reduces to a case of smaller k. For future reference we record these simple observations.
Lemma 3.1
Let \(\varphi \) be as in (3.1).
1. If some \(A_j\) is a constant linear combination of other columns then the problem reduces to a similar problem with smaller k.
2. If some \(v^i\) is constant the problem reduces.
3. If some \(v^i\) is a constant linear combination of other \(v^j\) then the problem reduces.
Proof
1. For example let us suppose that \(A_k=c_1A_1+\dots +c_{k-1}A_{k-1}\). Then we can set
Hence \(\varphi =Av={\tilde{A}}{\tilde{v}}\).
2. Let us then suppose that \(v^k=c\) is constant and let \({\tilde{v}}=(v^1,\dots ,v^{k-1})\). Then
But the conditions in Theorem 2.2 do not depend on the term \(cA_k\).
3. Suppose that \(v^k=c_1v^1+\dots +c_{k-1}v^{k-1}\). Then we can set
Hence \(\varphi =Av={\tilde{A}}{\tilde{v}}\). \(\square \)
Also with respect to time one has a simple invariance.
Lemma 3.2
Suppose that some \(\varphi =A(t)v(z)\) is a solution and let \({\tilde{\varphi }}=M(\theta )\varphi \) or \({\tilde{\varphi }}={\hat{M}}(\theta )\varphi \). Then \({\tilde{\varphi }}\) is a solution if and only if \(\theta =c_1t+c_0\) where \(c_j\) are constants.
Proof
This is a simple computation using the criteria of Theorem 2.2. \(\square \)
Hence, if convenient we can always rotate or reflect our solution with such a matrix. Note that the rotation adds a constant to the vorticity: if \({\tilde{\varphi }}=M(\theta _0t)\varphi \), then \({\tilde{\zeta }}=2\theta _0+\zeta \).
3.1 Case \(k=2\)
Let us briefly recall what happens when \(k=2\), see also [11, 18] for more details. Then according to Lemma 2.3 we can assume without loss of generality that \(v(z)=z\). In this case the coordinates z are in fact Lagrangian coordinates, and the corresponding vector field in Eulerian coordinates is given by
The conditions (3.2) and (3.5) are now
where e and c are constants. The solution can be written explicitly for example in the following way. We have \(\varphi =Az\) where
Here \(\theta \) and r are arbitrary functions of t. Note that this is a QR decomposition of the matrix A. We may take \(e=1\) without loss of generality, so that A is a curve in \(\mathbb {SL}(2)\).
Hence one can describe the degree of generality of the solution by saying that one can choose two functions of time arbitrarily. The solution set can also be given in a very different form using complex analysis, as in [11]. Note that there is no real choice for the function v; one can say that it is uniquely defined in the sense of Lemma 2.3. In spite of the relative triviality of this case, the well-known Kirchhoff solution is of this form [16].
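To illustrate the structure of the solution, one can check symbolically that a product of a rotation and an upper triangular factor with unit determinant stays in \(\mathbb {SL}(2)\) for all time. The parametrization below is a plausible sketch only; the factor R is our assumption and is not meant to reproduce formula (3.6) exactly.

```python
from sympy import Function, Matrix, cos, simplify, sin, symbols

t = symbols('t')
theta = Function('theta')(t)
r = Function('r')(t)

# Rotation factor M(theta) and a hypothetical upper triangular factor R
# with det R = 1 (the exact R of formula (3.6) is not reproduced here)
M = Matrix([[cos(theta), -sin(theta)], [sin(theta), cos(theta)]])
R = Matrix([[r, 0], [0, 1/r]])
A = M * R

assert simplify(A.det()) == 1   # A(t) is a curve in SL(2)
```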
3.2 Case \(k=3\)
Somewhat surprisingly, to the best of our knowledge this case has not been investigated before. Let us first state the main result, which turns out to be a generalization of the above case. To this end we first define the following matrices:
Here r, \(\theta \), \(a_1\) and \(a_2\) are functions of t.
Theorem 3.3
Let \(v=\big (z_1,z_2,f(z_2)\big )\) and \(\varphi =Av\) where A is as above. Then this gives a solution to the Euler equations if
In this case
Proof
Using the criteria of Theorem 2.2 one easily verifies that this is a solution. \(\square \)
As mentioned, (3.6) is a special case of this Theorem, obtained by choosing \(f(z_2)=c\, z_2\). Note that here, too, we can actually achieve \(\det (d\varphi )=1\) so that the coordinates z are in fact real Lagrangian coordinates.
While it is easy to check that we indeed obtain a solution, it is not so easy to prove that this is essentially the most general solution of this form. Note that we have here two arbitrary functions of time, namely r and \(\theta \), as in the case \(k=2\); in addition we have one arbitrary function of one variable in the z coordinates, namely f. On the other hand, there is no canonical form of the solution. For example one could take \({\tilde{v}}=\big (z_1,f(z_1),z_2\big )\). Then, modifying the matrix A a little, we can still get a solution, but the degree of generality remains the same.
Note that we can find a solution with prescribed vorticity in the sense that given any \(\zeta \) that depends only on \(z_2\) we can find the corresponding f by simple integration.
Let us show how to find the complete solution set. For \(\varphi \) to be a solution, the constraint equations
have to be independent of time.
Lemma 3.4
If there are no constraints for the spatial variables, then the problem reduces to the case \(k=2\).
Proof
Without loss of generality we may assume that \(p_{12}\ne 0\). Then we have
But if there are no constraints for the spatial variables then each \(p_{ij}\) must be constant and the problem reduces by Lemma 3.1. \(\square \)
If we have one constraint this can be put in a simpler form.
Lemma 3.5
If there is one constraint for the spatial variables, then without loss of generality we can assume that \(g_{23}=0\) and we can choose \(p_{12}=1\) and \(p_{13}=0\) in (3.7).
Proof
By renaming the variables if necessary we may write the constraint as \(\alpha _{12}g_{12} + \alpha _{13}g_{13} + g_{23} =0\). Let
and put \(\tilde{v}=Hv\); then we compute that \({\tilde{g}}_{23}=\alpha _{12}g_{12} + \alpha _{13}g_{13} + g_{23} \). Hence we may assume that \(g_{23}=0\) in (3.7) so that \(p_{12}=e_1\) and \(p_{13}=e_2\) where \(e_j\) are constants. By symmetry, we may assume that \(e_1\ne 0\) and by scaling we can make it equal to 1. Then let
Now \(\varphi = Av=\hat{A}\hat{v}\), where \(\hat{v}\) satisfies \(\hat{g}_{23}=0\) and \(\hat{A}\) satisfies \(\hat{p}_{12}-1=\hat{p}_{13}=0\). \(\square \)
Hence we expect that there can be only one constraint in the spatial domain.
Lemma 3.6
If there are two constraints for the spatial variables then either \(\det (d\varphi )=0\) or the problem reduces.
Proof
Lemma 3.1 implies that if \(\nabla v^j=0\) for some j the problem reduces, so we may suppose that \(\nabla v^j\ne 0\). We have seen that we can assume that one constraint is \(g_{23}=0\) and thus \(v^3=f(v^2)\) for some f. But then the other constraint is of the form
If \(g_{12}=0\) then \(\det (d\varphi )=0\) by Lemma 2.1. If \(c_1+c_2f'(v^2)=0\) then \(v^3=c\,v^2+d\) for some constants c and d and the problem reduces by Lemma 3.1. \(\square \)
Hence there can only be one constraint of the form (3.3).
Theorem 3.7
The most general solution is of the form given in Theorem 3.3.
Proof
We have seen that without loss of generality we may suppose that the constraint is \( g_{23}=0\). The solution to this equation is \(v^3={\tilde{f}}(v^2)\) for an arbitrary function \({\tilde{f}}\). By Lemma 2.3, we may thus assume that \({\tilde{v}}=\big (z_1,z_2,{\tilde{f}}(z_2)\big )\). Substituting \(g_{23}=0\) into (3.7) implies also that
is independent of time and so there are constants \(c_j\) such that
But this means that the matrices \((A_1,A_2)\) and \((A_1,A_3)\) must both be solutions to the \(2\times 2\) case. Hence by formula (3.6) we have
and the functions \(a_j\) are given by
But according to Lemma 3.5 we can as well choose \(e_1=1\) and \(e_2=0\). Moreover, by setting
we see that we can also choose \(c_1=0\) and \(c_2=1\). \(\square \)
Let us illustrate what a solution of this type might look like. Note that there cannot be any periodic solutions apart from those that can be obtained by considering the case \(k=2\). Since there is a lot of freedom in choosing the various functions, many different kinds of behavior are possible. In particular the motion of a single particle can be quite complicated, depending on the choice of the arbitrary functions. But there appears a kind of wavefront defined by \(A_1\): at each t, the points whose \(z_2\) coordinates are equal all lie on the same line parallel to \(A_1\). In Fig. 1 we have chosen
4 Case \(k=4\), the Spatial Dependence
Now we consider solutions of the form (3.1) with \(k=4\). Let us start by reducing the spatial constraints into a simpler form. The constraints are again as in (3.3) and there are now six terms in the sum. Let us first state the main observations.
Theorem 4.1
If there are fewer than two or more than two constraints, then either \(\mathsf {rank}(dv)<2\) or the problem reduces to the case \(k<4\).
Theorem 4.2
If there are two constraints, then without loss of generality we may assume that they are
The proof will be based on several Lemmas.
Lemma 4.3
Without loss of generality we may assume that one constraint is of the form
Proof
Let \({\tilde{v}}\) be a vector and let \({\tilde{g}}_{ij}\) be the corresponding minors of \(d{\tilde{v}}\). Then one constraint can be written as
Without loss of generality we may assume that \(\alpha _{24}=1\). Then let us introduce the following matrix
and let \({\tilde{v}}=H v\). Then the constraint becomes
\(\square \)
Lemma 4.4
If there is only one constraint, the problem reduces to the case \(k<4\).
Proof
If there is only one spatial constraint, by the above Lemma we may assume it to be \(g_{24}+c\,g_{13}=0\). Then the determinant conditions for A imply that the five expressions
must all be constant. However, the values of the minors must also satisfy the equation in Lemma 2.1. But then one column of A must be a constant linear combination of other columns and hence the problem reduces by Lemma 3.1. \(\square \)
Lemma 4.5
Without loss of generality we may assume that the two constraints are
Proof
By Lemma 4.3 we already know that one constraint can be written as \( g_{24}+c_0g_{13}=0\). Hence if the second constraint is \( g_{13}=0\) we have our first case.
Otherwise we can, without loss of generality, assume that the second constraint is of the form
Now let us set
Then we set \({\tilde{v}}=Hv\). With this substitution the first constraint is the same as before and the second is of the form \(g_{14}+c_1g_{13}+c_2g_{23}=0\) for some constants \(c_j\). \(\square \)
Note that the first case of Theorem 4.2 is the first case of Lemma 4.5, and the second case of Theorem 4.2 is obtained from the second case of Lemma 4.5 by choosing \(c_0=c_1=c_2=0\). Then we must analyze how to reduce
further when not all constants are zero.
Lemma 4.6
If not all \(c_j\) are zero, then without loss of generality we may assume that \(c_0\ne 0\) in Lemma 4.5.
Proof
Let us show that if \(c_0=0\) the problem reduces to the previously known cases.
The case \(c_0=c_1=0\) and \(c_2\ne 0\). Here we can swap \(v^1\) and \(v^2\) to obtain \(g_{14}=g_{24}+c_2 g_{13}=0\) and the system is in the desired form.
The case \(c_0=0\) and \(c_1\ne 0\). Let
and let \(\tilde{v}=Hv\). After this transformation we have \(g_{24}=g_{13}=0\), the first case in Theorem 4.2. \(\square \)
We are finally ready for the proof of Theorem 4.2.
Proof of Theorem 4.2
The first case of the classification in Theorem 4.2 is the first case of Lemma 4.5. The second and third case of the Theorem are obtained from the second case of Lemma 4.5 by choosing \(c_0=c_1=c_2=0\), or \(c_0=1\) and \(c_1=c_2=0\).
Let \({\tilde{v}}\) be our vector and let us denote the corresponding minors by \({\tilde{g}}_{ij}\). We have to show that in the remaining cases we obtain the fourth case or another known case. By Lemma 4.6 we may assume that \(c_0=1\) and hence we have to reduce constraints of the form
to a simpler form. Let
and \({\tilde{v}}=Hv\). Then
If the polynomial \(p=x^2-c_1x-c_2\) has distinct real roots, then choosing \(\beta _j\) to be these roots we obtain \(g_{24}=g_{13}=0\), the first case in Theorem 4.2. If there is a double root then choosing \(\beta _1=c_1/2\) we get \(g_{14}=0\), which leads to the third case. If the roots are complex we choose \(\beta _2=(2c_2+c_1\beta _1)/(2\beta _1-c_1)\) which leads to
Since \(4c_2+c_1^2<0\), we can further reduce this to \(g_{14}- g_{23}=0\) by scaling. \(\square \)
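The case split in this proof can be summarized by the discriminant of \(p=x^2-c_1x-c_2\). The sketch below merely mirrors that classification and is not part of the proof.

```python
def classify(c1, c2):
    """Classify x^2 - c1*x - c2 by its discriminant c1^2 + 4*c2,
    mirroring the case split in the proof of Theorem 4.2."""
    disc = c1**2 + 4*c2
    if disc > 0:
        return "distinct real roots: reduces to g24 = g13 = 0"
    if disc == 0:
        return "double root: reduces to the g14 = 0 case"
    return "complex roots: reduces to g14 - g23 = 0 after scaling"

# p = x^2 - 1 has roots +-1, p = x^2 - 2x + 1 has the double root 1,
# and p = x^2 + 1 has complex roots
print(classify(0, 1))
print(classify(2, -1))
print(classify(0, -1))
```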
Now in Theorem 4.2 we have four PDE systems for the vector v. So the next task is to find the general solutions to these systems. However, one of the cases can be discarded.
Lemma 4.7
If \(g_{14}=g_{24}=0\) then the problem reduces to the case \(k<4\).
Proof
Renaming the variables we can write the system as \(g_{34}=g_{24}=0\). If \(\nabla v^4=0\) the problem reduces by Lemma 3.1. If \(\nabla v^4\ne 0\) then Lemma 2.1 implies that \(g_{23}=0\) so the conditions for A are
for some constants \(e_j\), \(c_j\). This means that \((A_1,A_2)\), \((A_1,A_3)\), and \((A_1,A_4)\) are all solutions to the \(2\times 2\) case. Hence according to formula (3.6) we can write \(A=M(\theta )R\), where
and \(a_j'=(2e_j\theta '-c_j)/r^2\). But then we can write \(a_j=e_jg_1(t)+c_jg_2(t)+d_j\) where \(g_j\) are some functions and \(d_j\) are constants. Replacing \(v^1\) by \(v^1+d_1v^2+d_2v^3+d_3v^4\) we may assume that \(d_j=0\). This implies that a constant linear combination of \(A_2\), \(A_3\), and \(A_4\) is zero, and thus the problem reduces by Lemma 3.1. \(\square \)
Let us then find the solutions in the remaining cases.
Theorem 4.8
In the relevant cases of Theorem 4.2 we have the following solutions where the functions \(f_j\) are arbitrary.
1. If \(g_{13}=g_{24}=0\) then we can take
$$\begin{aligned} v=\big (z_1,z_2,f_1(z_1),f_2(z_2)\big ). \end{aligned}$$
2. If \(g_{14}=g_{24}+g_{13}=0\) then we can take
$$\begin{aligned} v=\big (z_1,z_2,z_2f_1'(z_1)+f_2(z_1),f_1(z_1)\big ). \end{aligned}$$
3. If \(g_{14}-g_{23}=g_{24}+g_{13}=0\) then we can take \(v=\big (z_1,z_2,v^3,v^4\big )\) where \(v^3\) and \(v^4\) satisfy the anti CR system.
Proof
In each of the three cases we must have \(g_{12}\ne 0\). Indeed, otherwise the equalities of Lemma 2.1, combined with the conditions of any of the three cases, imply that all the minors are zero and thus \(\det (d\varphi )\) is zero. Therefore by Lemma 2.3 we may choose a labelling with \(v^1=z_1\) and \(v^2=z_2\).
Now we prove each case of the Theorem:
Case 1. The general solution to \(g_{13}=g_{24}=0\) can be written as \(v^3=f_1(z_1)\) and \(v^4=f_2(z_2)\).
Case 2. The equation \(g_{14}=0\) implies that \(v^4=f_1(z_1)\) where \(f_1\) is arbitrary. Then the second equation is \(v^3_{01}=f_1'(z_1)\), and integrating with respect to \(z_2\) gives the result.
Case 3. Simply substituting \(v=\big (z_1,z_2,v^3,v^4\big )\) we obtain the anti CR system. \(\square \)
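The constraints of case 3 are easy to verify symbolically for a concrete anti CR pair. In the sketch below we take \(v^3\) and \(v^4\) to be the real and imaginary parts of \((z_1-iz_2)^2\); this particular pair is our choice for illustration, and we check the constraints \(g_{14}-g_{23}=0\) and \(g_{24}+g_{13}=0\) directly.

```python
from sympy import diff, simplify, symbols

z1, z2 = symbols('z1 z2')

# Real and imaginary parts of (z1 - i*z2)^2, a sample anti CR pair
v3 = z1**2 - z2**2
v4 = -2*z1*z2
v = [z1, z2, v3, v4]

def g(i, j):
    """Minor of dv built from components i and j (1-based indices)."""
    return (diff(v[i-1], z1)*diff(v[j-1], z2)
            - diff(v[i-1], z2)*diff(v[j-1], z1))

# The two constraints of case 3 in Theorem 4.8
assert simplify(g(1, 4) - g(2, 3)) == 0
assert simplify(g(2, 4) + g(1, 3)) == 0
```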
It is difficult to show directly that these three cases are actually different, i.e. they cannot be reduced to each other. We will show this later in Lemma 5.10.
It remains to prove Theorem 4.1. We already know that two constraints can be reduced to the cases in Theorem 4.8. Hence we should show that if we add further equations the problem reduces to the case \(k<4\).
Proof of Theorem 4.1
If there is only one constraint the problem reduces by Lemma 4.4. If there are three constraints we have the following cases.
Case 1 of Theorem 4.8. Without loss of generality we may suppose that the three constraints are
Since we know that \(v=\big (z_1,z_2,f_1(z_1),f_2(z_2)\big )\), then simply substituting this to the third equation gives
It is straightforward to check that the solutions are affine and hence the problem reduces.
Case 2 of Theorem 4.8. Consider the equations
We know that \(v=\big (z_1,z_2,z_2f_1'(z_1)+f_2(z_1),f_1(z_1)\big )\). Hence the third equation is
Again it is easy to check that the solutions are affine and the problem reduces.
Case 3 of Theorem 4.8. Now \(v=(z_1,z_2,v^3,v^4)\) where \((v^3,v^4)\) is an anti CR map. The first two constraints are thus the anti CR system and the third constraint can be written as
Using the anti CR system to eliminate \(v^4\) we thus obtain a system
Using rifsimp one easily verifies that the solutions are necessarily affine and thus the problem reduces. \(\square \)
4.1 Comparison of Cases 1 and 3
Let us point out a relationship between cases 1 and 3 which is somewhat hidden in the formulation given. In case 3 we have \(v=\big (z_1,z_2,v^3,v^4\big )\) where \((v^3,v^4)\) is an anti CR map, and in case 1 \(v=\big (z_1,z_2,f_1(z_1),f_2(z_2)\big )\) where \(f_j\) are arbitrary. But now recall that the general solution of the one dimensional wave equation \(u_{11}=0\) can be written as
$$\begin{aligned} u=F(z_1)+G(z_2). \end{aligned}$$
So in a way case 3 is an elliptic case and case 1 is a hyperbolic case. In fact we could have used a different basic form in Theorem 4.8 to make the connection more explicit. Just as in case 3 we have an anti CR system, in case 1 we could have used the coupled wave system
In this way \(v^3\) and \(v^4\) are both solutions to the wave equation \(u_{20}-u_{02}=0\). However, a simple change of variables leads to the form given in Theorem 4.8, which is more convenient to represent the solutions to Euler equations. Taking this point of view we thus obtain a new family of solutions, case 1, from the old one, case 3, by changing one sign in the anti CR system. We will see that this elliptic/hyperbolic character also shows up when we compute the corresponding vorticities below.
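The hyperbolic side of this comparison is just d'Alembert's formula: solutions of \(u_{20}-u_{02}=0\) split into two arbitrary functions of the characteristic variables, which, after the change of variables mentioned above, is exactly the shape of case 1. A quick symbolic check:

```python
from sympy import Function, diff, simplify, symbols

z1, z2 = symbols('z1 z2')
F, G = Function('F'), Function('G')

# d'Alembert: u = F(z1 + z2) + G(z1 - z2) solves the wave equation
# u_20 - u_02 = 0, with z1 +- z2 the characteristic variables
u = F(z1 + z2) + G(z1 - z2)
assert simplify(diff(u, z1, 2) - diff(u, z2, 2)) == 0
```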
5 \(k=4\), the Time Dependence
Now we begin the analysis of the time component A in the three relevant cases shown in Theorem 4.8.
5.1 Case 3
In this case the spatial constraints are
and we have seen that we may take \(v=(z_1,z_2,f^1,f^2)\) for some anti CR map \(f=(f^1,f^2)\). This case has been studied previously [5, 6, 11, 18]. A famous example is the Gerstner map [14]:
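The formula for the Gerstner map is not reproduced in this excerpt; in one common normalization (sign conventions vary, so take the form below as illustrative) it reads \(x = a + k^{-1}e^{kb}\sin k(a+ct)\), \(y = b - k^{-1}e^{kb}\cos k(a+ct)\) with \(b\le 0\). The sketch below checks numerically the defining quasi Lagrangian property: \(\det d\varphi = 1 - e^{2kb}\), independent of t.

```python
import math

K, C = 1.0, 1.0  # wavenumber and wave speed (illustrative values)

def gerstner(a, b, t):
    """One common form of the Gerstner map (conventions vary)."""
    th = K * (a + C * t)
    r = math.exp(K * b) / K
    return a + r * math.sin(th), b - r * math.cos(th)

def jac_det(a, b, t, h=1e-6):
    """det of the spatial Jacobian d(x,y)/d(a,b) by central differences."""
    xa = (gerstner(a + h, b, t)[0] - gerstner(a - h, b, t)[0]) / (2 * h)
    ya = (gerstner(a + h, b, t)[1] - gerstner(a - h, b, t)[1]) / (2 * h)
    xb = (gerstner(a, b + h, t)[0] - gerstner(a, b - h, t)[0]) / (2 * h)
    yb = (gerstner(a, b + h, t)[1] - gerstner(a, b - h, t)[1]) / (2 * h)
    return xa * yb - xb * ya

a, b = 0.3, -0.5
for t in (0.0, 0.7, 2.0):
    # time independent and equal to 1 - e^{2Kb}: the determinant is
    # constant in time but not equal to one (quasi Lagrangian).
    assert abs(jac_det(a, b, t) - (1 - math.exp(2 * K * b))) < 1e-5
print("ok")
```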
In this case we compute
In general we have the following result.
Theorem 5.1
Let f be any anti CR map such that \(1-|\nabla f^1|^2\ne 0\) in D. If
then \(\varphi \) gives a solution to Euler equations and in this case
Proof
This is again a simple computation using the criteria of Theorem 2.2. \(\square \)
Note that now we have practically no choices in the time domain, but in some sense more choices in the spatial domain than in the previous cases.
In Fig. 2 we have an example of this case with
where \(\theta _0\) is the parameter of the implicit rotation matrix \(M(\theta _0 t)\) by which we may premultiply the solution, according to Lemma 3.2.
Before proving that this is indeed the most general form of the solution, let us make a few comments on the form of the solution. Solutions of this type have previously been given in different forms, so let us indicate the relationship between the various formulations. Let \(w=(w^1,w^2)\) and \({\hat{w}}=({\hat{w}}^1,{\hat{w}}^2)\) and let \(\theta _0\) and \(\mu _0\) be some constants. Then one could look for solutions of the form
As explained in [18], in the PDE system (5.1) for w and \({\hat{w}}\) one can, for example, prescribe w arbitrarily and then solve for the corresponding \({\hat{w}}\); this is a regular elliptic system for \({\hat{w}}\). Note that here (anti) CR maps play no role a priori. However, it has been known that if w is a CR map and \({\hat{w}}\) an anti CR map, then this provides a solution to the equations (5.1) [4, 6]. In fact, it seems that all the solutions known before [18] assumed the harmonicity of w and \({\hat{w}}\).
In any case, we have the following simple observation.
Lemma 5.2
If there is a solution of the form (5.3) then there is an anti CR map f such that \({\hat{w}}=f\circ w\).
Hence even if w and \({\hat{w}}\) are not (anti) CR maps, they are connected by an anti CR map.
Proof
Without loss of generality we may suppose that \(\det (dw)\ne 0\). Hence there is some map f such that \({\hat{w}}=f\circ w\). Then substituting this to the system shows that f must be an anti CR map. \(\square \)
Now using w as new coordinates we obtain solutions of the form given in Theorem 5.1. Note that the form (5.3) can be very useful, because it may be possible or more convenient to compute w and \({\hat{w}}\) directly, in which case f is typically not explicitly known.
Let us then turn to the proof that the most general solution is given by Theorem 5.1, taking into account Lemmas 2.3 and 3.2 as always. Using (5.1), the conditions of Theorem 2.2 give that
are constant w.r.t. time. Hence there are constants \(e_j\) and \(c_j\) such that
First we can reduce the problem to a simpler form.
Lemma 5.3
Without loss of generality we may assume that \(e_1\ne 0\).
Proof
Suppose that \(e_1=0\). Due to symmetry we only need to consider the case where also \(e_2=0\). Let \(v=H\tilde{v}\) where
Now \(\tilde{v}\) satisfies the equations (5.1) if and only if v satisfies them. Then let \(\tilde{A}=AH\); thus we can write \(\varphi =Av={\tilde{A}}{\tilde{v}}\). Then we obtain
Here \(e_3\) and \(e_4\) cannot both be zero because otherwise \(\det (d\varphi )=0\). Thus, after this transformation we have \(\tilde{e}_1=\tilde{p}_{12}\ne 0\) or \(\tilde{e}_2=\tilde{p}_{34}\ne 0\), and by symmetry we may assume the former. \(\square \)
Lemma 5.4
Without loss of generality we can choose \(e_1=e_2=1\) and \(e_3= e_4=0\) in (5.4).
Proof
By the previous Lemma we can assume that \(e_1\ne 0\) and by scaling we can assume that \(p_{12}=e_1=1\); hence \(B=\big (A_1,A_2\big )\in \mathbb {SL}(2)\). Let
Here \(\beta \) is some function. Then for A we have \( p_{13}-p_{24}=e_3\) and \( p_{14}+p_{23}=e_4 \). For \({\tilde{A}}\) we have \({\tilde{p}}_{13}- {\tilde{p}}_{24}={\tilde{p}}_{23}+ {\tilde{p}}_{14}=0 \) and \({\tilde{p}}_{34}=1\). Since \(\tilde{v}\) satisfies the equations (5.1) if and only if v satisfies them, we can write \(\varphi =Av={\tilde{A}}{\tilde{v}}\). \(\square \)
Theorem 5.5
If the spatial constraints are (5.1), then the most general solution is given by (5.2).
Proof
By the previous Lemma we may suppose that
where \(B\in \mathbb {SL}(2)\). If \(B=M(\mu )\), we obtain immediately that \(\mu '\) and \(\beta '\) are constants and we get the required form using Lemma 3.2.
If B is not a rotation, it can be written as
where \(s \ne 0\), \(\mu \), and \(\theta \) are some functions. The conditions in the second row of (5.4) give the following equations:
Evidently \(\theta -\mu -\beta \) must be constant. It follows that \(\beta '\) and s are constants, and further that \(\mu '\) and \(\theta '\) are constants. Hence we can write
where \(\mu _0\), \(\mu _1\), \(\theta _0\), \(\theta _1\), and \(\beta _0\) are constants. Let us set \(\beta _1=\theta _1-\mu _1\). Using Lemma 3.2 we can premultiply by the matrix \(M(-\mu _1t-\mu _0)\) so that without loss of generality we may assume that
Hence
Now it is straightforward to check that \((w,{\hat{w}})\) satisfies the system (5.1), and hence by Lemma 5.2 we may take w as new coordinates, which then gives the required form. \(\square \)
5.2 Case 1
Here the spatial constraints are \(g_{13}=g_{24}=0\) and the equations for the time component are
Thus we have constants \(e_j\), \(c_j\) such that
Note that by Lemma 2.1 we have \(p_{13}p_{24}=e_1e_2+e_3e_4=\) constant.
Lemma 5.6
We may choose \(e_1=1\) and \(e_3=e_4=0\) without loss of generality.
Proof
Due to symmetry, we may assume that \(p_{12}\ne 0\) and further that \(p_{12}=e_1=1\). Let \(\ell \) be some function and let
Then for A we have \( p_{23}=e_3\) and \(p_{14}=e_4 \) while for \({\tilde{A}}\) we have \({\tilde{p}}_{23}= {\tilde{p}}_{14}=0 \). Now if \({\tilde{v}}=Hv\) then we still have \({\tilde{g}}_{13}={\tilde{g}}_{24}=0\). Hence we can write \(\varphi =Av={\tilde{A}}{\tilde{v}}\). \(\square \)
Theorem 5.7
If v satisfies \(g_{13}=g_{24}=0\), then the most general solution \(\varphi =Av\) is given by
where c is a constant, and \(v=\big (z_1,z_2,f_1(z_1),f_2(z_2)\big )\), where \(1-f_1'f_2'\ne 0\) in D. In this case
Proof
By the previous Lemma we may assume that \( \det (A_1,A_2)=1 \), \(A_3=\ell A_2\) and \(A_4=A_1/\ell \). The conditions in the second row of (5.5) give the following conditions for A:
Hence \((\ell '/\ell )^2=\) constant and thus we can take \(\ell (t)=e^{2ct}\). Then we see that \(\langle A_1,A_2\rangle \) is constant and hence we may write
where \({\hat{A}}_j\) are constant vectors. Now we check that in fact \(\theta \) is a linear function so by Lemma 3.2 we can drop M. Then by a constant rotation we can assume that \({\hat{A}}_1=(r,0)\) so that at present we can write
where r and b are constants. We can still assume \(r=1\) and \(b=0\) by introducing
and setting \({\tilde{A}}=AH\) and \(v=H{\tilde{v}}\). \(\square \)
A single particle now has a fairly straightforward trajectory: when \(|t|\) is large, it approaches the origin approximately along the line parallel to the vector \((f_2(z_2),z_2)\) if t is negative, and moves away from the origin approximately along the line parallel to the vector \((z_1,f_1(z_1))\) if t is positive. Its trajectory thus resembles a hyperbola. When t is negative the particles are grouped according to the \(z_2\) coordinate, and when t is positive they are grouped according to the \(z_1\) coordinate. If we further rotate the solution with \(M(\theta _0 t)\), the particles also rotate around the origin as they move towards or away from it. In Fig. 3 we have chosen \(c=1\), \(\theta _0=1/2\), \(f_1= 3\cos (3z_1)/(2+2z_1^2)\) and \(f_2 = -\sin (3z_2/2)/4+\sin (4z_2)/2\) as an example.
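The hyperbola-like asymptotics can be illustrated numerically. Since the exact formula of Theorem 5.7 is not reproduced in this excerpt, the map below is our own schematic guess, \(\varphi (t)=e^{ct}\,(z_1,f_1(z_1)) + e^{-ct}\,(f_2(z_2),z_2)\) with \(\theta _0=0\), chosen only to reproduce the stated asymptotic directions; it is not claimed to be the paper's solution (5.2). The \(f_j\) are those of the Fig. 3 example.

```python
import math

# f1, f2 as in the Fig. 3 example
f1 = lambda z1: 3 * math.cos(3 * z1) / (2 + 2 * z1 ** 2)
f2 = lambda z2: -math.sin(3 * z2 / 2) / 4 + math.sin(4 * z2) / 2

C = 1.0  # the constant c

def phi(z1, z2, t):
    """Schematic trajectory (our guess, NOT the formula of Theorem 5.7):
    incoming branch along (f2(z2), z2), outgoing branch along (z1, f1(z1))."""
    e, einv = math.exp(C * t), math.exp(-C * t)
    return (e * z1 + einv * f2(z2), e * f1(z1) + einv * z2)

def cos_angle(p, q):
    """Cosine of the angle between vectors p and q."""
    dot = p[0] * q[0] + p[1] * q[1]
    return dot / (math.hypot(p[0], p[1]) * math.hypot(q[0], q[1]))

z1, z2 = 0.5, 1.0
# large negative t: position nearly parallel to (f2(z2), z2)
assert cos_angle(phi(z1, z2, -10.0), (f2(z2), z2)) > 0.999
# large positive t: position nearly parallel to (z1, f1(z1))
assert cos_angle(phi(z1, z2, 10.0), (z1, f1(z1))) > 0.999
print("ok")
```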
5.3 Case 2
Now we have the following equations for the time component:
Lemma 5.8
Without loss of generality we may take \(e_4=1\) and \(e_1=e_2=e_3=0\).
Proof
If \(e_4=0\), Lemma 2.1 implies that \(p_{13}\) and \(p_{24}\) are constants. Thus the problem reduces.
Hence we may assume that \(e_4=1\). Let \(\ell \) be some function and let
Then for A we have \(p_{12}=e_1\), \( p_{34}=e_2\) and \( p_{13}-p_{24}=e_3 \) while for \({\tilde{A}}\) we have \({\tilde{p}}_{12}= {\tilde{p}}_{34}={\tilde{p}}_{13}-{\tilde{p}}_{24}=0 \). Now if \({\tilde{v}}=Hv\) then we still have \({\tilde{g}}_{13}+{\tilde{g}}_{24}={\tilde{g}}_{14}=0\). Hence we can write \(\varphi =Av={\tilde{A}}{\tilde{v}}\). \(\square \)
Theorem 5.9
If v satisfies \( g_{13}+g_{24}= g_{14}=0\), then the solution \(\varphi =Av\) is given by
where \(v=\big (z_1,z_2,z_2f_1'(z_1)+f_2(z_1),f_1(z_1)\big )\) with \(-z_2f_1''-f_2'\ne 0\). Moreover
Proof
By the previous Lemma we may assume that \(\det (A_2,A_3)=1\), \(A_1=\ell A_2\) and \(A_4=\ell A_3\). Hence the conditions for A can be written as
Evidently \(|A_2|\), \(|A_3|\) and \(\ell '\) are constants. Then it is easy to compute that the solution is of the form
where we may assume \(b_1\ne 0\). Now the transformation \(\tilde{A}=AH\), \(v=H\tilde{v}\), where
preserves the spatial constraints and gives the desired form to the time component. \(\square \)
Except for the possible rotation of constant speed, the trajectory of each particle is a line segment parallel to the vector \((z_1,f_1(z_1))\). Figure 4 gives an example of this case, with \(f_1=\cos (z_1)\), \(f_2=z_1^2-20z_1\), and \(\theta _0=-1/40\).
Now that Theorems 5.1, 5.7, and 5.9 give the time component for each of the cases obtained in Theorem 4.8, it is easy to show that these three cases are inequivalent.
Lemma 5.10
The three cases of Theorem 4.8 cannot be reduced to each other by a linear transformation \(\tilde{v}=Hv\).
Proof
Let us prove that cases 1 and 2 are inequivalent. The proof for the rest of the pairs is similar. If cases 1 and 2 were equivalent, then there would be a solution \(\varphi = Av = \tilde{A}\tilde{v}\), where v is an instance of case 1 and \(\tilde{v}=Hv\) an instance of case 2. But then we would also have \(\tilde{A}=AH^{-1}\), where A is a solution to case 1 given by Theorem 5.7 and \(\tilde{A}\) a solution to case 2 given by Theorem 5.9, and there is clearly no matrix H that can satisfy this. \(\square \)
6 Vorticity
Let us finally say a few words about vorticity. Above we computed some families of solutions and the corresponding vorticities. However, one could also ask whether one can find a solution with a prescribed vorticity. Let us examine each of the relevant cases.
First let us consider the situation in Theorem 5.7. Our solution is \(\varphi =Av\) where
and the vorticity is given by
Lemma 6.1
If the vorticity is given by (6.1), then it is a solution to the following PDE:
Proof
Note that the equation (6.1) is not "overdetermined" in the usual sense. However, the right hand side is of the separation of variables type, so the left hand side cannot be completely arbitrary. Giving this equation to rifsimp and specifying an elimination order that eliminates the functions \(f_j\) produces the given PDE. \(\square \)
Note that we can actually find one family of solutions to the vorticity equation:
Here the \(d_j\) are constants. Of course this is not the general solution. Note also that the equation for the vorticity is a kind of nonlinear wave equation.
Let us then consider Theorem 5.1. Now we have
where f is an anti CR map and
Lemma 6.2
If the vorticity is given by (6.2), then
Proof
Since f is an anti CR map we have also \(\Delta f^1=0\), so again \(\zeta \) cannot be arbitrary. Using rifsimp to eliminate \(f^1\) we obtain the above PDE for \(\zeta \). \(\square \)
Again one can find a specific family of solutions:
In this case the vorticity equation is a nonlinear elliptic equation.
Finally we have the case of Theorem 5.9. Now \(\varphi =Av\) where
and
Lemma 6.3
If the vorticity is given by (6.3), then
where the functions \(g_j\) are arbitrary.
Proof
Eliminating \(f_1\) and \(f_2\) with rifsimp we obtain \( \zeta \zeta _{02}-2\zeta _{01}^2=0\), whose general solution is given above. \(\square \)
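The displayed general solution is not reproduced in this excerpt, but the PDE is effectively an ODE in \(z_2\) with \(z_1\) as a parameter: writing \(\zeta '=\partial _{z_2}\zeta \), the equation \(\zeta \zeta ''=2(\zeta ')^2\) integrates to \(\zeta = 1/\big (g_1(z_1)z_2+g_2(z_1)\big )\). The sketch below verifies this numerically for illustrative (arbitrarily chosen) \(g_j\).

```python
import math

# illustrative choices; the g_j are arbitrary functions of z1
g1 = lambda z1: 1 + z1 ** 2
g2 = lambda z1: 2 + math.sin(z1)

def zeta(z1, z2):
    """Candidate general solution 1/(g1(z1)*z2 + g2(z1))."""
    return 1.0 / (g1(z1) * z2 + g2(z1))

def residual(z1, z2, h=1e-4):
    """zeta * zeta_{02} - 2 * zeta_{01}^2, with z2-derivatives
    computed by central differences."""
    z0 = zeta(z1, z2)
    zp = zeta(z1, z2 + h)
    zm = zeta(z1, z2 - h)
    d1 = (zp - zm) / (2 * h)          # zeta_{01}
    d2 = (zp - 2 * z0 + zm) / (h * h)  # zeta_{02}
    return z0 * d2 - 2 * d1 * d1

for z1, z2 in ((0.0, 0.5), (1.0, 1.5), (-0.7, 0.3)):
    assert abs(residual(z1, z2)) < 1e-6
print("ok")
```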
Notes
The coordinates z are sometimes called labels, and D is then the labelling domain.
References
Abrashkin, A.: Unsteady Gerstner waves. Chaos Solitons Fractals 118, 152–158 (2019)
Abrashkin, A.A.: Theory of interaction between two plane vortices in a perfect fluid. Fluid Dyn. 22(1), 53–59 (1987)
Abrashkin, A., Oshmarina, O.E.: Pressure induced breather overturning on deep water: exact solution. Phys. Lett. A 378, 2866–2871 (2014)
Abrashkin, A., Oshmarina, O.E.: Rogue wave formation under the action of quasi-stationary pressure. Commun. Nonlinear Sci. Numer. Simul. 34, 66–76 (2016)
Abrashkin, A.A., Yakubovich, E.I.: Two-dimensional vortex flows of an ideal fluid. Dokl. Akad. Nauk SSSR 276(1), 76–78 (1984)
Aleman, A., Constantin, A.: Harmonic maps and ideal fluid flows. Arch. Ration. Mech. Anal. 204(2), 479–513 (2012)
Andreev, V.K., Kaptsov, O.V., Pukhnachov, V.V., Rodionov, A.A.: Applications of group-theoretical methods in hydrodynamics. Mathematics and its Applications, vol. 450. Kluwer Academic Publishers, Dordrecht (1998)
Bennett, A.: Lagrangian Fluid Dynamics. Cambridge Monographs on Mechanics. Cambridge University Press, Cambridge (2006)
Constantin, A.: Nonlinear Water Waves with Applications to Wave-Current Interactions and Tsunamis. CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 81. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2011)
Constantin, A., Monismith, S.: Gerstner waves in the presence of mean currents and rotation. J. Fluid Mech. 820, 511–528 (2017)
Constantin, O., Martín, M.: A harmonic maps approach to fluid flows. Math. Ann. 369(1–2), 1–16 (2017)
Constantin, P.: On the Euler equations of incompressible fluids. Bull. Amer. Math. Soc. (N.S.) 44(4), 603–621 (2007)
Cox, D., Little, J., O’Shea, D.: Ideals, Varieties, and Algorithms, 4th edn. Undergraduate Texts in Mathematics. Springer (2015)
Gerstner, F.: Theorie der Wellen samt einer daraus abgeleiteten Theorie der Deichprofile. Ann. Phys. 2, 412–445 (1809)
Henry, D.: An exact solution for equatorial geophysical water waves with an underlying current. Eur. J. Mech. B. Fluids 38, 18–21 (2013)
Kirchhoff, G.: Vorlesungen über mathematische Physik: Mechanik. Teubner, Leipzig (1876)
Kluczek, M.: Exact Pollard-like internal water waves. J. Nonlinear Math. Phys. 26(1), 133–146 (2019)
Martín, M., Tuomela, J.: 2D incompressible Euler equations: new explicit solutions. Discrete Contin. Dyn. Syst. A 39(8), 4547–4563 (2019)
Reid, G., Wittkopf, A., Boulton, A.: Reduction of systems of nonlinear partial differential equations to simplified involutive forms. Eur. J. Appl. Math. 7(6), 635–666 (1996)
Seiler, W.: Involution: The Formal Theory of Differential Equations and its Applications in Computer Algebra. Algorithms and Computation in Mathematics, vol. 24. Springer, Berlin (2010)
Funding
Open access funding provided by University of Eastern Finland (UEF) including Kuopio University Hospital.
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Communicated by A. Constantin.
The first author was supported by the North Karelia Regional Fund of Finnish Cultural Foundation.
Cite this article
Saleva, T., Tuomela, J. On the Explicit Solutions of Separation of Variables Type for the Incompressible 2D Euler Equations. J. Math. Fluid Mech. 23, 39 (2021). https://doi.org/10.1007/s00021-020-00538-y