Explicit solutions to the 3D incompressible Euler equations in Lagrangian formulation

We introduce many families of explicit solutions to the three-dimensional incompressible Euler equations for inviscid fluid flows using the Lagrangian framework. Almost no exact Lagrangian solutions existed in the literature prior to this study. We search for solutions in which the time component and the spatial component are separated, applying the same ideas we used previously in the two-dimensional case. We present a general method for deriving separate constraint equations for the spatial component and the time component. This provides us with a wealth of solution sets exhibiting several different types of fluid behaviour, but since they are computationally heavy to analyze, we restrict the deeper analysis to the most interesting cases. It is also possible, perhaps even probable, that there exist more solutions of the separation of variables type beyond those we have found.


Introduction
This is the third paper, following [13,15], in our program of finding explicit solutions to the incompressible Euler equations in the Lagrangian framework. The Lagrangian framework is one of the two main approaches to describing fluid motion: in the Lagrangian representation the flow is described by giving the trajectories of individual particles, while the other main approach, the Eulerian framework, considers the velocity field or the vorticity field of the flow. For general background on the Euler equations we refer to [8], and for the Lagrangian formulation in particular we refer to [5]. The Lagrangian formulation has been applied to flow problems together with group-theoretic methods in [4].
In our two previous articles we studied the two-dimensional case, finding several solution sets, and in the present paper we apply similar ideas to the three-dimensional case. The goal is to find solutions of the separation of variables type, where the space and time coordinates are treated separately. The first explicit solutions of this type were found already in the 19th century by Gerstner and Kirchhoff in the 2D case [10,11]. In recent decades these solutions have been generalized using harmonic maps, see for example [1,2,3,7]. All these solutions were of a form in which the time component and the spatial component could be separated, and finally in [15] we systematically found probably all possible solutions of the separation of variables type to the 2D Euler equations in the Lagrangian formulation, including ones that could not be found with harmonic maps.
However, it seems that there are surprisingly few articles dealing with the 3D case; the only exception we found is [18]. After all, many researchers have studied this problem for a long time in the two-dimensional context, and it would seem natural to try to find similar solutions also in three dimensions, which is of course physically the most relevant case. Perhaps one reason is that since classically complex functions were used to analyze the two-dimensional case, the generalization to the three-dimensional case appeared difficult.
In [18] the problem is approached in a slightly nonconventional variant of the Lagrangian formulation, not describing the system for the particle trajectory map ϕ but rather for its differential dϕ. For the spatial part they in fact used the complex formulation for two of the components and a real function for the third component. To find exact solutions, they considered cases where the time component and the spatial component are separated for dϕ. A simple integration argument shows that then ϕ is also of the separation of variables form and thus these solutions can also be found with the technique presented in this paper.
In any case, in [13,15] we showed that even in two dimensions the situation is best analyzed using real functions, and thus a priori it seemed natural to believe that similar techniques should be applicable also in the three-dimensional case. In the present paper we show that this is indeed so: even though certain classes of solutions can equivalently be analyzed with the complex formulation, one can find many more large classes of solutions by operating with real functions. In fact it is not possible to describe all solutions: there are simply so many possibilities that one cannot treat them all in a short article like this. Nor is it possible to describe all computations in detail; some intermediate formulas and expressions are simply too big to be written down. However, this is not a problem in the sense that the verification of the final results is a straightforward computation, although this computation would be very tedious if done by hand.
Perhaps one should also clarify what we mean by the word "explicit" in the title. In some cases the solution set cannot really be described by completely explicit formulas. In these cases one may think that the word "explicit" refers to the explicit description of the structure of the solutions. However, we think that these structural descriptions can also be very useful in applications: combining numerical and symbolic computation one can use them to "build" specific solutions with the desired properties. Also one can then analyze what kind of properties are actually possible for a given family of solutions.
In Section 2 we introduce some useful notions and results which are needed later. In Section 3 we formulate our framework precisely. Then in Sections 4 to 8 we present our solutions. We start with simple cases which can be described more thoroughly; these cases show the reader the main ideas in the computations. In later sections many details must be suppressed, but the computations are very similar to those in the simpler cases. One does not need any new methods to deal with the complicated cases; the difficulty comes just from the sheer size of the problem. Unlike in the 2D case studied in [15], here we will not consider all possible families of solutions due to their vast number. Instead, we concentrate on those cases which seem to us the most promising from the physical point of view. However, we also indicate some systems which we know lead to nontrivial explicit solutions but which are not analyzed in the present paper. It is also quite possible that there are cases that we did not discover at all.

Euler parameters
We need to represent rotations in R 3 . There are several ways to do this, but Euler parameters are the most convenient for our purposes. Let a = (a 0 , a 1 , a 2 , a 3 ) and let us define the following matrices: If |a| = 1 then R a ∈ SO(3) and K̂ a , K a ∈ SO(4). Since R a = R −a , the sphere S 3 is the double cover of SO(3).
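For reference, R a is the standard rotation matrix associated with a unit quaternion; the following sketch (numpy, function name ours) checks the properties just stated:

```python
import numpy as np

def rotation_from_euler_params(a):
    """Rotation matrix R_a in SO(3) built from Euler parameters (a0, a1, a2, a3)."""
    a0, a1, a2, a3 = a
    return np.array([
        [a0**2 + a1**2 - a2**2 - a3**2, 2*(a1*a2 - a0*a3),             2*(a1*a3 + a0*a2)],
        [2*(a1*a2 + a0*a3),             a0**2 - a1**2 + a2**2 - a3**2, 2*(a2*a3 - a0*a1)],
        [2*(a1*a3 - a0*a2),             2*(a2*a3 + a0*a1),             a0**2 - a1**2 - a2**2 + a3**2],
    ])

a = np.array([1.0, 2.0, -0.5, 0.3])
a /= np.linalg.norm(a)                  # normalize so that |a| = 1
R = rotation_from_euler_params(a)

assert np.allclose(R.T @ R, np.eye(3))                 # R_a is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)               # det R_a = 1, so R_a in SO(3)
assert np.allclose(R, rotation_from_euler_params(-a))  # R_a = R_{-a}: double cover
```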

PDE
Let u and v be functions of two variables; there are two natural conventions regarding the sign in the Cauchy-Riemann equations: It turns out that for us it is more convenient to use the second system. We will call this the anti-CR system.
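As a concrete illustration, assuming the anti-CR convention u x = −v y , u y = v x : the real and imaginary parts of an anti-holomorphic function satisfy the anti-CR system, which can be checked by finite differences:

```python
import numpy as np

def d(f, x, y, axis, h=1e-6):
    """Central finite difference of f(x, y) along x (axis=0) or y (axis=1)."""
    dx, dy = (h, 0.0) if axis == 0 else (0.0, h)
    return (f(x + dx, y + dy) - f(x - dx, y - dy)) / (2 * h)

# Real and imaginary parts of the anti-holomorphic function (x - i*y)**2
u = lambda x, y: x**2 - y**2
v = lambda x, y: -2 * x * y

x0, y0 = 0.7, -1.3
# anti-CR system (sign convention is our assumption): u_x = -v_y, u_y = v_x
assert abs(d(u, x0, y0, 0) + d(v, x0, y0, 1)) < 1e-6
assert abs(d(u, x0, y0, 1) - d(v, x0, y0, 0)) < 1e-6
```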
For general PDE systems it is more convenient to use multiindices to denote derivatives. Hence if u = (u 1 , . . . , u m ) is some map, the derivatives of its components are denoted accordingly. We will say several times below that a solution to a certain PDE system exists by standard theorems. This means that the necessary details can easily be found in [9]. We will also need some elementary notions of differential geometry in the analysis; for details we refer to [12].
We will analyze below many rather complicated ODE and PDE systems. Since the analysis often cannot be given very explicitly, it is perhaps helpful to discuss the general nature of such systems and their solutions. For a general treatment of these issues we refer to [16] and the many references therein. All our equations are differential polynomials, so some kind of algorithmic treatment is possible at least in principle. A fundamental theorem in differential algebra says that any radical differential ideal is a finite intersection of prime differential ideals.
One may interpret this by saying that prime differential ideals correspond to certain families of solutions, and hence the theorem says that there are in fact only finitely many essentially different families of solutions.
Another complication is that our systems contain parameters, and in general one must make case distinctions depending on the values of the parameters. The differential elimination algorithm rifsimp [14], which is implemented in Maple and which we use at times, by default looks for the largest family. However, it must be kept in mind that there may also be other solutions, not contained in the family given.
In the solutions given below we must at times make case distinctions so that there are several families of solutions, i.e. the differential ideal is not prime. However, we do not need to take into account all cases because we can discard the cases which are irrelevant in the present context. We will explain below more precisely what this means.

Minors
Let A ∈ R 3×m ; the columns of A are denoted by A j . Let us further set p ijk = det(A i , A j , A k ). The following formulas will be useful in computations.
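In concrete computations the minors p ijk are conveniently generated mechanically; a small Python sketch (function name ours):

```python
import itertools
import numpy as np

def minors_3(A):
    """All 3x3 minors p_ijk = det(A_i, A_j, A_k) of a 3 x m matrix A,
    indexed by column triples i < j < k (0-based here)."""
    m = A.shape[1]
    return {(i, j, k): np.linalg.det(A[:, [i, j, k]])
            for i, j, k in itertools.combinations(range(m), 3)}

A = np.arange(15, dtype=float).reshape(3, 5) + np.eye(3, 5)
p = minors_3(A)

assert len(p) == 10   # C(5,3) = 10 minors when m = 5
assert np.isclose(p[(0, 1, 2)], np.linalg.det(A[:, :3]))
```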

Euler equations
Let us consider the incompressible Euler equations in some domain Ω ⊂ R n . This is called the Eulerian description of the flow and the coordinates of Ω, denoted x, are the Eulerian coordinates. Below we will consider another description which is almost the Lagrangian description of the flow.
Let D ⊂ R n be another domain and let us consider a family of diffeomorphisms ϕ t : D → Ω t = ϕ t (D). The coordinates in D are denoted by z. We can also define α = det(dϕ). Now, given such a ϕ, we can define the associated vector field u by the formula (3.2). Our goal is to find maps ϕ such that u solves the Euler equations in the three-dimensional case.
In the Lagrangian case we should require that α = 1 but here we only require that it is independent of time and nonzero in D.
It will be convenient to interpret the first equation of (3.1) as well as the equation (3.2) in terms of covectors instead of vectors. Then differentiating the equation (3.2) we obtain ϕ ′′ + dp = 0 .
Then the condition for the existence of local solutions is simply dϕ ′′ = 0. The exterior derivative cannot be directly computed in x coordinates, so we have to pull back this condition to z coordinates. To this end let us introduce the two-form h. Note that if we interpret the vorticity ζ = du as a two-form, then h is the vorticity in z coordinates. A fundamental fact is that, denoting by L the Lie derivative, we have [6,17] ζ t + L u ζ = 0 .
We will often write h for simplicity in terms of its components. The components h j are classically known as Cauchy invariants [6], and as the name suggests they should not depend on time. Accordingly we see that dϕ ′′ = 0 is equivalent to h ′ = 0. Let us summarize this discussion with Lemma 3.1. Note that here there is an essential difference to the 2D case: the condition for h is related to two-forms while the condition for α is formulated in terms of three-forms. In the 2D case both conditions are evidently formulated with two-forms.
It is also interesting to express the vorticity ζ as a vector in x coordinates in terms of h. To this end we first express h as a vector using the Hodge star and raising the index; in components this gives the formula below. From this we get Cauchy's vorticity formula [6]. This can be interpreted either as a pushforward or as describing how the components of the vector field change under the change of coordinates.
Let us then consider maps of the form (3.3), i.e. ϕ(z, t) = A(t)v(z), where A(t) ∈ R 3×m , v : D → R m and D ⊂ R 3 is some coordinate domain. Our next task is to compute convenient formulas for h and α in this case.
Since all the analysis is local, the precise nature of D is not important in our context. We will try to find maps ϕ such that the corresponding vector field u defined by the formula (3.2) is a solution to the Euler equations. Hence we should find A and v such that the conditions in Lemma 3.1 are satisfied. Since we want that det(dϕ) ≠ 0, this necessarily implies that rank(A) = rank(dv) = 3. Of course we can always assume that det(dϕ) > 0.
Let us denote the coordinates of the intermediate space R m by y. Then we can interpret A as a map R m → R 3 and we can write ϕ = A ◦ v, so that dϕ = A dv. But from this we immediately obtain det(dϕ) = Σ i<j<k p ijk g ijk . This is known as the Cauchy-Binet formula. Then let us introduce the quantities Q ij . To solve the Euler equations we need (3.4) and (3.5) to be constant in time by Lemma 3.1. The crucial observation is that when we fix any t in (3.4) and (3.5), we see that for the time derivatives of α and h to vanish the spatial component only needs to satisfy constraints of the forms (3.6), where γ ijk and γ ij are constants. Furthermore, when we substitute these equations back into (3.4) and (3.5), we obtain the constraints that p ijk and Q ij have to satisfy.
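Since the Cauchy-Binet formula is used repeatedly below, a quick numerical check may reassure the reader; here random matrices stand in for A(t) at a fixed t and for the Jacobian dv at a point, with m = 5:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m = 5
A = rng.standard_normal((3, m))   # stands in for A(t) at fixed t
B = rng.standard_normal((m, 3))   # stands in for the Jacobian dv at a point

# Cauchy-Binet: det(A B) = sum over column triples of p_ijk * g_ijk
lhs = np.linalg.det(A @ B)
rhs = sum(np.linalg.det(A[:, [i, j, k]]) * np.linalg.det(B[[i, j, k], :])
          for i, j, k in itertools.combinations(range(m), 3))
assert np.isclose(lhs, rhs)
```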
There is the following curious connection between p ijk and Q ij .
Proof. Let first m = 5, so that Ω = Ω 12345 dy 1 ∧ dy 2 ∧ dy 3 ∧ dy 4 ∧ dy 5 . Let us denote the rows of A by Â i . Then a direct computation gives the claim. If m > 5 then similar formulas are valid for each group of five indices.
We now have our formulas for α and h, but we can still simplify them. First it is important to remember that the domain D is simply a parameter domain which has no physical significance. Hence one can look for the "simplest" possible parameter domain. For future reference let us record this observation as Lemma 3.3. Proof. This follows from standard properties of the pullback.
In particular, if g 123 ≠ 0 then we can suppose that v is of the form (3.7), i.e. v = (z, f (z)). In each case in which we find the general solution for v, the indices of v are chosen such that g 123 = 0 would imply det(dϕ) = 0. Thus we will always write v as in (3.7).
We can also write the time component in a more convenient way; recall that by the QR decomposition we can write A = RB, where R ∈ SO(3) and B is upper triangular. To see the effect of the QR decomposition on h we first compute the relevant formulas; using the Euler parameters to represent the rotations, we can then in a sense eliminate a. Let us set w = 4 H a a ′ and ŵ = (0, w 1 , w 2 , w 3 ). Then we can write a ′ = (1/4) K̂ T w a. Note that K̂ ŵ is antisymmetric, which guarantees that |a| stays constant. K̂ ŵ is also a Hamiltonian matrix, so that a ′ = (1/4) K̂ T w a is in fact a Hamiltonian system with respect to the standard symplectic form. We can use w as a kind of extra variable in our equations. In what follows we will often have conditions where p ijk or Q ij is some constant; in such cases these constants are always denoted by p ijk = e ijk and Q ij = c ij .
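The norm-preservation argument can be illustrated numerically: for any antisymmetric K the flow of a ′ = K a conserves |a|, since (d/dt)|a|² = 2⟨a, K a⟩ = 0. A sketch (the 4×4 matrix below follows the standard quaternion kinematic pattern and is only illustrative of K̂ ŵ, whose exact entries we do not reproduce here):

```python
import numpy as np

w1, w2, w3 = 0.7, -1.2, 0.4
K = 0.25 * np.array([
    [0.0, -w1, -w2, -w3],
    [w1,  0.0, -w3,  w2],
    [w2,  w3,  0.0, -w1],
    [w3, -w2,  w1,  0.0],
])
assert np.allclose(K, -K.T)   # antisymmetric generator

# Integrate a' = K a with classical RK4 and check that |a| stays constant
a = np.array([1.0, 0.0, 0.0, 0.0])
h = 1e-3
for _ in range(2000):
    k1 = K @ a
    k2 = K @ (a + 0.5 * h * k1)
    k3 = K @ (a + 0.5 * h * k2)
    k4 = K @ (a + h * k3)
    a = a + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

assert abs(np.linalg.norm(a) - 1.0) < 1e-9
```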
Note that the presentation of ϕ = A(t)v(z) is not unique.
Although the statement of the Lemma is really trivial, it is actually quite useful in concrete computations and we will use it repeatedly to simplify the problems below without losing any generality.
Then we note the following simple fact which allows us to discard uninteresting cases.
Lemma 3.5. Suppose that in the representation (3.3)

1. A j are linearly dependent over R, or
2. dv j are linearly dependent over R.

Then the solutions reduce to a case of lower m.
The simple proof can be found in [15]. The problems we study typically have several families of solutions. Some of them are reducible to a case of lower m by Lemma 3.5 and thus need not be considered in the same context. In the solutions given below we have verified that all the reducible solution families have been discarded and that the families given contain all the relevant irreducible ones.
In the following sections we find several solution classes for the cases m = 3, m = 4, m = 5, and m = 6. For m = 4 and m = 5 there are subcases that we do not consider here, since for each m we concentrate on the solutions for which the spatial component has the most freedom. In the case m = 6 the systems for A are always overdetermined, and it is fortunate that there are at least some solution classes in that case, too.
Let us note that the explicit solutions given in [18] are particular cases of either m = 3 or m = 4.

Cases m = 3 and m = 4
If A ∈ R 3×3 and v ∈ R 3 , any constraint of the form (3.6) would imply α = 0, and thus we may as well assume that v = z by Lemma 3.3. This case might be called the Kirchhoff case because it is analogous to the Kirchhoff solution in the 2D case.
In this case, for (3.4) and (3.5) to be independent of time we need p 123 = det(A), as well as Q 12 , Q 13 and Q 23 , to be constant. By scaling we may assume that A ∈ SL(3), and by elementary counting arguments we suspect that there are 5 arbitrary functions in the general solution. Writing A = RB, where R ∈ SO(3) and B is an upper triangular matrix, the condition α = 1 becomes b 11 b 22 b 33 = 1. Hence there are 5 functions which can be chosen arbitrarily: b 11 , b 22 , w 1 , w 2 and w 3 . Moreover, to demonstrate the usefulness of Lemma 3.4 we present the following simplification. Note that in this case h is also constant in space and that the solution in Euler coordinates is u = A ′ A −1 x.
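The Eulerian form u = A ′ A −1 x can be sanity-checked numerically: for any curve A(t) in SL(3) the velocity field is linear in x and divergence free, since div u = tr(A ′ A −1 ) = (det A) ′ /det A = 0. A minimal sketch with a sample curve of our own choosing:

```python
import numpy as np

def A(t):
    """A sample curve in SL(3): a rotation times a volume-preserving stretch."""
    Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0,        0.0,       1.0]])
    S = np.diag([np.exp(t), np.exp(-t), 1.0])
    return Rz @ S

t, eps = 0.3, 1e-6
dA = (A(t + eps) - A(t - eps)) / (2 * eps)   # A'(t) by central difference
M = dA @ np.linalg.inv(A(t))                 # u(x) = M x in Euler coordinates

assert np.isclose(np.linalg.det(A(t)), 1.0)  # A(t) in SL(3)
assert abs(np.trace(M)) < 1e-8               # div u = tr(M) = 0: incompressible
```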
Then let A ∈ R 3×4 . Let us start with the constraint that α in (3.4) must be constant. Using v = (z, f (z)) we obtain α = p 123 + p 124 f 001 − p 134 f 010 + p 234 f 100 . Note that we cannot require that all p ijk are constant. Since p 234 A 1 − p 134 A 2 + p 124 A 3 − p 123 A 4 = 0, we must by Lemma 3.5 have a constraint of the form (3.6), where γ ijk are constants.
Lemma 4.2. We may assume this constraint to be g 234 = 0.
Proof. We may assume that γ 234 = 1. Then the transformation ṽ = Hv, with a suitable constant matrix H, gives the claim. The solution of the equation g 234 = 0 is v = (z 1 , z 2 , z 3 , f (z 2 , z 3 )) for an arbitrary function f . Then we must have p ijk = e ijk as well as Q ij = c ij for each i < j. We can assume that e 123 ≠ 0, and by scaling we may further assume that e 123 = 1. But if e 124 = e 134 = 0 then we have α = 1, and we must also have b 24 = b 34 = 0, which gives

This leads to
where b 12 , b 13 , b 14 and b 23 satisfy the equations given in the proof below. The functions θ, b 11 and b 22 are arbitrary, and R̄ is a constant rotation matrix.
Proof. Substituting the above B into the equations Q ij = c ij we first obtain expressions for some of the unknowns. Substituting these back into the equations we note that w 2 = w 3 = 0. Then, since w 1 is arbitrary, we may as well define θ ′ = w 1 /2, in which case one can easily write down the solution of a ′ = (1/4) K̂ T w a. By applying the constant rotation the solution can then be written as indicated. Note that if ϕ is a solution and R̄ is a constant rotation then R̄ϕ is also a solution. In the sequel we will no longer write this rotation explicitly. One could then ask how "general" a vorticity can be constructed with solutions of this form. Proof. Eliminating f from the equations h = ĥ readily gives the result.
Note that since dh = 0 there is always the equation h 1 100 + h 2 010 + h 3 001 = 0 for ĥ. If we supposed that f depends only on one variable, we could compute other families of solutions. However, these solutions are less interesting physically, so we do not analyze them further.

Case m = 5
In the case m = 5 it turns out that there are numerous subcases providing solutions, and it is impossible for us to consider all of them here. Hence we will concentrate on the cases for which the spatial component has the greatest freedom, which happens when we have two equations of the form γ ijk g ijk = 0 as the spatial constraints. The systems obtained for the time component are then overdetermined and usually do not give any solutions if the spatial constraints are chosen arbitrarily. However, we found three cases that do provide solutions:

g 134 + g 235 = 0 , g 135 − g 234 = 0 ;   g 134 = 0 , g 235 = 0 ;   and   g 134 + g 235 = 0 , g 135 = 0 .   (5.1)

These are analogous to the three cases we found for the 2D case in [15, sec. 4]. As we briefly noted in [15], the three 2D cases could be called elliptic, hyperbolic, and parabolic, based on the second-order differential equation that the spatial component functions satisfy in each case. We adopt this terminology here and refer to the three cases of (5.1) as the elliptic, hyperbolic, and parabolic cases, respectively. In the current section we present the solution for the spatial component and the time component in these three cases. As examples of cases which we will not consider any further but which give nontrivial families of solutions we may cite

Elliptic case
Let the spatial constraints be g 134 + g 235 = g 135 − g 234 = 0 . The solution of this system is as follows. Proof. Eliminating w j from the equations Q ij = c ij we eventually find the following system, which implies relations among the constants c ij . Note that c 12 must be nonzero.
Proof. Solving for w j and θ ′ , and using the previous Lemma, we note that b 12 = 0, which then readily yields that b 22 = b 11 . By Lemma 3.4 we can add a multiple of any A j to A 3 without loss of generality and thus achieve b 13 = b 23 = 0. Substituting these into the formulas for w j and θ ′ gives the rest.
The corresponding vorticity is then

Hyperbolic case
Here we consider the spatial constraints g 134 = g 235 = 0 , which easily lead to v = (z 1 , z 2 , z 3 , f 1 (z 1 , z 3 ), f 2 (z 2 , z 3 )). Then for the time dependent part each Q ij and the following minors have to be constant: Without loss of generality we may add a multiple of A 5 to A 2 and a multiple of A 1 to A 4 in order to obtain b 12 = 0, after which we add a suitable linear combination of the columns of A to A 3 to obtain b 13 = b 23 = 0. By scaling we may also assume k 2 = 1. Finally we note that this form of solutions forces w = 0.
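One can check the claimed form of v directly: with g ijk denoting, as in Section 3, the minor of dv built from rows i, j and k, the constraints g 134 = g 235 = 0 hold identically for v = (z 1 , z 2 , z 3 , f 1 (z 1 , z 3 ), f 2 (z 2 , z 3 )). A numerical sketch (the sample functions f 1 , f 2 are of our own choosing):

```python
import numpy as np

# Sample choices for the free functions (hypothetical, only for the check)
f1 = lambda z1, z3: np.sin(z1) + z1 * z3        # f1 = f1(z1, z3)
f2 = lambda z2, z3: np.exp(z2) - z3**2          # f2 = f2(z2, z3)

def dv(z, h=1e-6):
    """5 x 3 Jacobian of v = (z1, z2, z3, f1(z1,z3), f2(z2,z3)), central differences."""
    v = lambda z: np.array([z[0], z[1], z[2], f1(z[0], z[2]), f2(z[1], z[2])])
    J = np.zeros((5, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (v(z + e) - v(z - e)) / (2 * h)
    return J

J = dv(np.array([0.3, -0.4, 0.8]))
g = lambda i, j, k: np.linalg.det(J[[i - 1, j - 1, k - 1], :])   # g_ijk, 1-based

assert abs(g(1, 3, 4)) < 1e-8          # g_134 = 0
assert abs(g(2, 3, 5)) < 1e-8          # g_235 = 0
assert abs(g(1, 2, 3) - 1.0) < 1e-8    # g_123 = 1 for this v
```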
In this case the vorticity can be written as

Parabolic case
Here we suppose that g 134 + g 235 = g 135 = 0 , which implies that v is of the form v = (z, f 1 + z 2 f 2 100 , f 2 ), where f j = f j (z 1 , z 3 ). Then for the time component we obtain the following conditions. Proof. Since so many constants are zero, one immediately computes that ℓ ′ = c 12 /b 2 12 and b 13 = b 14 = w 2 = w 3 = 0. Using these we obtain that also w 1 = b 24 = 0, and then we are left with the equation c 45 b 2 23 b 4 12 + c 12 = 0. Scaling then gives the solution above. The vorticity is now

m = 6: hyperbolic case

In these final three sections we study the cases where m = 6. When m = 6 the computations become much more involved than for smaller m. Also, coming up with spatial constraints that yield solutions for both the spatial and the time component is difficult. We managed to find three sets of spatial constraints that provide solutions, which are again analogous to the three cases we considered in [15, sec. 4] and to the ones considered in the previous section. But then we also found that by restricting the time component of these solutions we obtain more general solutions for the spatial component which we could not have come up with otherwise.
In this section we consider the spatial constraints

The solution of the spatial component is
Incidentally one can check that with this type of v one can find nontrivial solutions in any dimension.
The conditions for the time dependent part give that the following functions are constant: where c j are constants. Proof. We may assume that e 123 = e 156 = 1. To satisfy the determinant conditions we need the relations below, and we may assume c 23 = c 56 = 0. Then the computations show that the columns satisfy a linear relation. Since c 26 and c 35 have to be nonzero, the columns A 2 , A 3 , A 4 , A 5 and A 6 are always linearly dependent, and the statement follows from Lemma 3.5.
In case (1) of Lemma 6.1 we obtain two families of solutions. To describe the computations in detail is not really possible so that we merely present the main ideas which lead to the solutions. However, it is easy to verify that the claimed solutions are actually solutions since one can simply substitute everything to the relevant equations and check that the conditions are satisfied. Of course the computations by hand would be extremely tedious, but with a convenient computer algebra system the verification is easy.
From the conditions of case (1) it readily follows that there are functions ℓ j such that we can write A in either of two forms. By symmetry we may choose the first option. We can thus write A = RB where (ii) if there is a direct algebraic relation then it turns out that the constants must be chosen so that we can write ℓ 2 = 1/(m 1 ℓ 1 + m 0 ). Hence one function, for example b 11 , can be freely chosen.
Proof. Avoiding algebraic relation between ℓ 1 and ℓ 2 forces all constants c ij to be zero except c 16 , c 24 and c 35 . After this the solution given above follows.
Evidently k 0 , k 1 , k 2 , m 0 and m 1 are nonzero. The constants k 3 , k 4 and k 5 are arbitrary, except that choosing k 3 = k 4 = k 5 = 0 is incompatible with the hypothesis ℓ 2 = 1/(m 1 ℓ 1 + m 0 ). Apparently there is no explicit solution to the differential equation for ℓ 1 . However, there is the following discrete symmetry: if µ is a solution then µ̃(t) = m 2 0 /(m 2 1 µ(−t)) is also a solution. In this case the vorticity has the form where e j are some constants.

Extensions for the spatial component
Let us assume that A is of the form given above, where the functions b ij are as in Theorem 6.3. As we have seen, in this case the spatial part is of the form v = (z 1 , z 2 , z 3 , f 1 (z 1 ), f 2 (z 2 ), f 3 (z 3 )). However, if we a priori put some restrictions on b ij , it is possible to have more general solutions in the spatial part. There are several possibilities for choosing these restrictions, for example: All these choices lead to nontrivial solutions, but we consider only the first case as an example, to give a flavor of the idea. Now if b 11 = b 22 and b 16 = b 24 then it is easy to compute that actually b 11 = e ct and b 16 = e −ct . It follows that each Q ij is constant, so h is automatically independent of time, and for the determinant condition the spatial part only needs to satisfy g 145 + g 256 = g 236 − g 134 = g 125 = g 346 = 0 .
We can "solve" this as follows. From the first equation we have ∇g 3 = g ∇g 1 for some function g. But then the second equation implies that g 1 01 + g g 1 10 = 0. But clearly we must in addition require g 01 + g g 10 = 0, which leads to the following triangular system:

g 01 + g g 10 = 0 ,
g 1 01 + g g 1 10 = 0 ,
∇g 3 = g ∇g 1 ,
g 1 g 2 01 + g 3 g 2 10 = 0 .

The existence of local solutions now follows from standard theorems. For example, if g = (z 1 − c 1 )/(z 2 − c 2 ) then the system can be solved explicitly.

m = 6: elliptic case

Let the spatial constraints be as follows, where f 1 is arbitrary, and f 2 and f 3 are anti-CR.
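The example g = (z 1 − c 1 )/(z 2 − c 2 ) mentioned above indeed satisfies the first equation of the triangular system, g 01 + g g 10 = 0; assuming the multiindex convention g 10 = ∂g/∂z 1 and g 01 = ∂g/∂z 2 , a quick numerical check:

```python
import numpy as np

c1, c2 = 0.5, -1.0
g = lambda z1, z2: (z1 - c1) / (z2 - c2)

def dg(z1, z2, axis, h=1e-6):
    """Central finite difference of g along z1 (axis=0) or z2 (axis=1)."""
    dz = (h, 0.0) if axis == 0 else (0.0, h)
    return (g(z1 + dz[0], z2 + dz[1]) - g(z1 - dz[0], z2 - dz[1])) / (2 * h)

# first equation of the triangular system: g_01 + g * g_10 = 0
z1, z2 = 1.3, 0.7
assert abs(dg(z1, z2, 1) + g(z1, z2) * dg(z1, z2, 0)) < 1e-8
```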

The simplest solution set to the time constraints is
where θ 0 ≠ 0 is a constant.
The determinant condition gives the following equations: Let us first reduce as many constants to zero as possible. Proof. We may assume that p 123 = 1 and p 124 = 0. Computations show that we must have p 456 = 0 and p 356 ≠ 0, and we may assume by scaling that p 356 = 1. Then, using a transformation whose blocks H j are as in the corners of (5.2), we can bring c 1 and c 2 to zero. But then we find that c 24 A 1 − c 14 A 2 + ((c 12 − c 56 )/2)A 4 − c 46 A 5 + c 45 A 6 = 0 . Here c 12 − c 56 = 2θ ′ b 11 must be nonzero, so the columns of A are linearly dependent. Thus the solutions reduce.
The solutions of the second case also reduce. Lemma 7.3. In case (2) of Lemma 7.1 all solutions for A reduce to a case of lower m.
Proof. Again, we may assume p 123 = 1, p 124 = 0. If p 145 − p 246 ≠ 0, we can reduce the situation to case (1). Thus we may assume that p 145 − p 246 = 0, and, after adding a multiple of A 4 to A 3 , that p 135 − p 236 = 0. With these determinant constants we may write the following. There is a function θ and constants k j , γ such that ℓ 1 = k 1 + k 0 cos(θ), ℓ 2 = k 2 + k 0 sin(θ). Proof. Let r ij = ⟨B i , B j ⟩ as before. Four of the constraints can be written as follows: Solving for r ij from the first three equations and substituting into the fourth, we obtain that (ℓ 1 , ℓ 2 ) traces a circle. This gives the first formula, and we also have c 23 = −2c 12 k 1 + c 14 and c 24 = 2c 12 k 2 − c 13 . Using these ℓ j we can compute the expressions for r ij . Since b 2 11 b 2 22 = r 11 r 22 − r 2 12 we get the second formula. Hence to have nontrivial solutions we must have k 0 ≠ 0, c 12 ≠ 0 and γ ≠ 0. Without loss of generality we can also assume that k 1 = k ≥ 0, k 2 = 0 and k 0 = 1. It turns out that we have two families of solutions.
Hence both b 15 and b 25 must be nonzero, and also c 46 m 0 + c 36 m 1 = 0.
Proof. Substituting what we found already to the equations we notice that there are four equations which can be written as Here the elements of M are polynomials in cos(θ), sin(θ) and various constants. To have nontrivial solutions all 3 × 3 minors of M have to be zero. This is possible; a necessary condition in all cases is c 35 = c 45 = 0. But when k = 1 then the first column must be zero and the equations do not contain b 11 . However, the second and third columns are linearly independent for all acceptable choices of constants which implies that b 15 = b 25 = 0.
If k ≠ 1 then the solution is obtained by simple computation. Theorem 7.6. The solution can be written as follows when k ≠ 1: Writing v = (z, f (z)) as before and supposing that f 2 and f 3 are not anti-CR with respect to (z 1 , z 2 ), we obtain a system of the form f k 001 = g k , 1 ≤ k ≤ 3 , f 3 010 = g 4 , where the (real analytic) functions g k do not contain derivatives with respect to z 3 . The analysis of this system turns out to be surprisingly tricky, so we simply give some indications of the necessary steps. The terms in italics are defined and explained in [16].
If we had only the equations f k 001 = g k for 1 ≤ k ≤ 3, then the local solution with initial conditions f k (z 1 , z 2 , 0) = h k (z 1 , z 2 ) would exist by the Cauchy-Kovalevskaya Theorem. Now one could hope that if the functions h k satisfied the final equation then this would lead to a solution. Indeed there is a generalization of the Cauchy-Kovalevskaya Theorem, namely the Cartan-Kähler Theorem, which can be applied to the overdetermined case. However, this theorem requires that the system is involutive; in that case one can write the system in the Cartan normal form, from which one can see what kind of initial conditions are appropriate.
Unfortunately, our system is not involutive. Now any system can in principle be transformed to an involutive form by prolongations and projections; this is sometimes called the Cartan-Kuranishi algorithm. Typically the concrete computations are difficult, even for a computer. We checked that the symbol of the system becomes involutive after two prolongations, and that there is then one integrability condition of third order. We did not check whether adding this integrability condition produces an involutive system or whether there are further integrability conditions of higher order. In any case, in this way one can in principle compute the involutive form, after which the existence of the local solution follows from the Cartan-Kähler theorem.

m = 6: parabolic case
This case has the spatial constraints given below, whose solution is v = (z 1 , z 2 , z 3 , f 1 (z 3 ), z 2 f ′ 3 (z 1 ) + f 2 (z 1 ), f 3 (z 1 )) . The determinant constants can be reduced to several different cases. We omit the consideration of the ones that do not give solutions, as they just contain long calculations similar to what we have seen. The only reduced case for the determinant constants that provides irreducible solutions is the one where p 123 = 0, p 356 = 0, p 235 = 1, p 135 − p 236 = 0, p 124 = 0, p 456 = 0, p 245 = 0, p 145 − p 246 = 1.
It turns out that we must have either p 125 = p 345 = 0 or p 234 = p 256 = 0, and we may assume p 125 = p 345 = 0 by exchanging A 1 with A 6 and A 2 with A 5 if needed. The determinant conditions are thus satisfied if the entries are as below for some functions ℓ 1 and ℓ 2 , and in this case we have det(dϕ) = f 2 100 + z 2 f 3 200 − f 1 001 f 3 100 ≠ 0 . Now using the QR decomposition with respect to columns 2, 3 and 5 we can write A in the form given below. Let us present the constraints for v when A is as above. The equations v needs to satisfy are g 135 = g 246 = g 235 + g 136 + g 145 = 0 .