On the explicit solutions of separation of variables type for the incompressible 2D Euler equations

We study explicit solutions to the two-dimensional Euler equations in the Lagrangian framework. All known explicit solutions are of the separation of variables type, where the time and space dependence are treated separately. The first such solutions were known already in the 19th century. We show that all the previously known solutions belong to two families of solutions, and we introduce three new families of solutions. It seems likely that these are all the solutions of the separation of variables type.

In the present article we do not use complex analysis at all. The reason is simple: complex analysis is not needed, and the analysis given below is quite naturally formulated in terms of real functions and real variables. The new families of quasi Lagrangian solutions given below come naturally from a systematic analysis of the problem in the real domain. Indeed, the only reason we can think of why these families were not discovered previously is that their description using complex functions would be quite awkward. Moreover, the harmonicity of functions plays no role in these new solutions, and since our analysis is local, the question of whether given maps are analytic or merely differentiable is irrelevant in the present context.
Since we are not using complex functions, it is not so easy to compare our solutions to the previously known cases. For example, if the reader takes a look at our formula (3.6) and compares it to the essentially equivalent formula (25) in [11, Theorem 3], then it is clear that the equivalence is not immediately obvious. In any case, it seems that all the previously known solutions reduce either to the situation described in Section 3.1 (this could be called the Kirchhoff type case) or to the family of solutions given in Theorem 5.1 (the Gerstner type case). Solutions of these types can be found using harmonic maps, and even though there have previously been hints that even more complicated solutions exist [6,18], we show in this paper how they can all be reduced to these cases. Also, as far as we know, the families of solutions in Theorems 3.1, 5.3 and 5.4 are new, the first of which is a generalization of the Kirchhoff type. Thus we have four essentially different families of solutions, and apparently they give all the quasi Lagrangian solutions that are of the separation of variables type. We will not prove that there cannot be more solutions of this type, but we discuss below why we think that the existence of essentially different solutions is unlikely.
One could also ask how big the families of solutions are. One way to measure this is to count the number of arbitrary functions and constants in the general solution. Another physically interesting point of view is to ask if one can find a solution with prescribed vorticity. For all four families of solutions we can derive a certain PDE such that if the vorticity is a solution to this PDE, then there is an explicit solution with this vorticity. In one case the relevant PDE is obvious, while in the remaining three cases we have used the algorithm rifsimp [19], which is based on the ideas of the formal theory of PDE [20].
The Lagrangian framework has been and is still being used in many different contexts. In addition to the bulk flow, an interesting aspect is to model the flow in the presence of an air/water interface. In some other applied problems the equations are not precisely the Euler equations; for example, in large scale ocean current and meteorological problems it is important to take into account the Coriolis effect. In any case, we hope that our new solutions will be useful also in these more general problems. For various aspects of the applications of the Lagrangian point of view we refer to [1,3,4,9,10,15,17] and the many references therein.
The paper is organized as follows. In Section 2 we collect some necessary background material. In Section 3 we formulate the problem precisely and analyze the first family of solutions, of which the Kirchhoff type case is a special case. Then in Sections 4 and 5 we show that there are three more families of solutions, one of which is the Gerstner family and the other two are new. Finally, in Section 6 we discuss to what extent one can prescribe the vorticity of the solutions.

Notation
Let v = (v^1, . . . , v^m) : R^n → R^m be some map and α ∈ N^n a multiindex. For spatial derivatives we use the jet notation: v^j_α denotes the partial derivative of v^j corresponding to the multiindex α. If v depends also on time we may use for the time derivative v_t or v′, whichever is more convenient in a given formula. For functions a that depend only on time we always use a′ for the derivative.
In the analysis we will meet the Cauchy-Riemann equations in two different forms, so to avoid confusion let us introduce the following terminology. Let v : R^2 → R^2 be some map and consider the following PDE systems:

v^1_{10} = v^2_{01} , v^1_{01} = −v^2_{10}     and     v^1_{10} = −v^2_{01} , v^1_{01} = v^2_{10} .

The left system will be called the CR system and the right system the anti CR system. The solutions to the left system are CR maps and those of the right system anti CR maps. Let us also introduce the rotations and reflections

M(θ) = ( cos θ , −sin θ ; sin θ , cos θ ) , M̂(θ) = ( cos θ , sin θ ; sin θ , −cos θ ) ,

where θ is a function of time.
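The two systems can be checked mechanically. Below is a small sympy sketch, using the sign convention above (which is the one consistent with the anti CR example f = (z_1² − z_2², −2z_1z_2) appearing in Section 5):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# An anti-holomorphic example: conj(z)^2 = (z1^2 - z2^2) - i*(2*z1*z2),
# i.e. the map v = (z1**2 - z2**2, -2*z1*z2) used again in Section 5.
v1 = z1**2 - z2**2
v2 = -2*z1*z2

# CR system:      v1_{10} =  v2_{01},  v1_{01} = -v2_{10}
# anti CR system: v1_{10} = -v2_{01},  v1_{01} =  v2_{10}
cr = (sp.diff(v1, z1) - sp.diff(v2, z2), sp.diff(v1, z2) + sp.diff(v2, z1))
anti_cr = (sp.diff(v1, z1) + sp.diff(v2, z2), sp.diff(v1, z2) - sp.diff(v2, z1))

print(anti_cr)  # (0, 0): the map is anti CR (but not CR)
```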
The minors of various matrices appear frequently in the computations, so it is convenient to recall some facts about them. Let A ∈ R^{2×k} and let us denote the columns of A by A_j; then the minors of A will be denoted by p_ij = det(A_i, A_j). Also when v : R^2 → R^k is some map, the minors of its differential dv are denoted by g_ij = v^i_{10} v^j_{01} − v^i_{01} v^j_{10}. In the analysis below we will repeatedly use the following simple facts.
Lemma 2.1 Suppose that A_i ≠ 0 and p_ij = p_ik = 0; then also p_jk = 0. In addition, if ϕ = Av then we have the Cauchy-Binet formula det(dϕ) = Σ_{1≤i<j≤k} p_ij g_ij.
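The Cauchy-Binet formula is easy to verify symbolically; here is a quick sanity check for k = 3 (the matrix A and the map v are arbitrary choices for illustration):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
A = sp.Matrix([[1, 2, 0], [3, -1, 4]])        # a generic A in R^{2x3}
v = sp.Matrix([z1**2, z1*z2, sp.sin(z2)])     # some map v : R^2 -> R^3

dv = v.jacobian([z1, z2])                     # 3x2 matrix
phi = A * v
lhs = sp.simplify(phi.jacobian([z1, z2]).det())

# p_ij = det(A_i, A_j); g_ij = minor of dv built from rows i and j
def p(i, j): return A[:, [i, j]].det()
def g(i, j): return dv[[i, j], :].det()

rhs = sp.expand(sum(p(i, j) * g(i, j) for i in range(3) for j in range(i + 1, 3)))
print(sp.simplify(lhs - rhs))  # 0
```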

Overdetermined PDE
In some computations below we have used the algorithm rifsimp [19] which is implemented in Maple. The acronym rif means reduced involutive form and the word involutive refers to the fact that general systems of PDE can be transformed to an involutive form. For a comprehensive overview of overdetermined or general PDE systems we refer to [20].
An analogous situation arises in polynomial algebra [13]. A polynomial system generates an ideal, which in turn defines the corresponding variety. Now computing the Gröbner basis of the ideal gives a lot of information about the variety. Similarly the involutive form can reveal important information about the structure of the solution set. Intuitively one may think about computing the involutive form of a system of PDE like computing the Gröbner basis of an ideal.
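To make the analogy concrete, here is a minimal sympy example (an illustration of the general idea, not of rifsimp itself): the ideal generated by a circle and a line has a Gröbner basis that immediately exhibits a univariate condition for the intersection points.

```python
import sympy as sp

x, y = sp.symbols('x y')

# The ideal generated by these two polynomials defines a variety:
# the intersection of a circle and a line.
F = [x**2 + y**2 - 1, x - y]

# Lexicographic order eliminates x first, leaving a univariate
# polynomial in y in the basis.
G = sp.groebner(F, x, y, order='lex')
print(list(G.exprs))
```

Both generators reduce to zero modulo the basis, which is exactly the ideal-membership information that the involutive form provides in the PDE setting.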

Euler equations
Let us consider the incompressible Euler equations

u_t + (u · ∇)u = −∇p , ∇ · u = 0 , (2.1)

in some domain Ω ⊂ R^n. This is called the Eulerian description of the flow, and the coordinates of Ω, denoted by x, are the Eulerian coordinates. Below we will consider another description which is almost the Lagrangian description of the flow.
Let D ⊂ R^n be another domain and let us consider a family of diffeomorphisms ϕ_t : D → Ω_t = ϕ_t(D). The coordinates in D are denoted by z. Now given such ϕ we can define the associated vector field u by the formula

u(x, t) = (∂_t ϕ_t)((ϕ_t)^{-1}(x)) . (2.2)

Our goal is to find maps ϕ such that u solves the Euler equations in the two dimensional case. To state the relevant conditions, let us introduce the following matrices:

Straightforward computations show (see for example [18] for details) that we get the following conditions.

Theorem 2.1 Let h = det(P_1) + det(P_2) and let us suppose that the following conditions are satisfied: Then u given by (2.2) is a solution to (2.1).
In this case the Lagrangian description of the flow is given by the map Φ_t = ϕ_t ∘ (ϕ_0)^{-1}. Note that without loss of generality we can suppose that det(dϕ) > 0. It is also interesting to formulate the above condition in terms of vorticity. Recall that in the x coordinates the vorticity is ζ̄ = u^2_{10} − u^1_{01}. Let us denote by ζ the vorticity in the z coordinates, i.e. ζ = ζ̄ ∘ ϕ_t. Recall that in 2 dimensions, if u is a solution to the Euler equations, then ζ̄_t + ⟨u, ∇ζ̄⟩ = 0. In the z coordinates this simply means that ζ_t = 0. But then again straightforward computations show that in fact ζ = h / det(dϕ).
Hence the condition of the previous Theorem could also be formulated using the vorticity instead of h.
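As a minimal illustration of the fact that ζ_t = 0, consider the rigid rotation ϕ_t(z) = M(θ_0 t) z. The Eulerian field is then u(x) = θ_0 (−x_2, x_1), and both ζ̄ and ζ equal the constant 2θ_0 (this is a sketch of the bookkeeping, not one of the paper's nontrivial solutions):

```python
import sympy as sp

t, th0, z1, z2 = sp.symbols('t theta0 z1 z2')
x1, x2 = sp.symbols('x1 x2')

M = sp.Matrix([[sp.cos(th0*t), -sp.sin(th0*t)],
               [sp.sin(th0*t),  sp.cos(th0*t)]])
phi = M * sp.Matrix([z1, z2])

# Eulerian velocity u = phi_t o phi^{-1}; for a rotation M^{-1} = M^T.
back = M.T * sp.Matrix([x1, x2])
u = sp.simplify(phi.diff(t).subs({z1: back[0], z2: back[1]}))

zeta_bar = sp.simplify(sp.diff(u[1], x1) - sp.diff(u[0], x2))
print(zeta_bar)  # 2*theta0, constant in both x and t
```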
In what follows we will try to find the most general solution of the given form. Then it is important to remember that the domain D is simply some parameter domain which has no physical significance. Hence one can look for the "simplest" possible parameter domain. For future reference let us record this observation as Lemma 2.2: if ψ is a diffeomorphism of parameter domains and φ = ϕ ∘ ψ, then φ and ϕ give the same solution.

Proof. This is because det(dφ) = det(dϕ) det(dψ) and ζ̃ = ζ.

General formulation of the problem
Let us consider maps of the following form:

ϕ(t, z) = A(t) v(z) , (3.1)

where A(t) ∈ R^{2×k}, v : D → R^k and D ⊂ R^2 is some coordinate domain. Since all the analysis is local, the precise nature of D is not important in our context. We will try to find maps ϕ such that the corresponding vector field u defined by the formula (2.2) is a solution to the Euler equations. Hence we should find A and v such that the conditions in Theorem 2.1 are satisfied. Since we want that det(dϕ) ≠ 0, this necessarily implies that rank(A) = rank(dv) = 2.
The strategy we use to tackle this problem is as follows. Since det(dϕ) is independent of time, Lemma 2.1 implies that

Now, if we fix any t, we obtain from this formula a homogeneous linear equation for the minors of dv:

We also recall from [18] that if ϕ is given by (3.1) then

This condition also gives equations of the form (3.3) when t is fixed. We conclude that for the most general solution we should look for solutions for which v satisfies a system of constraints of the form (3.3).
By integrating (3.4) we obtain (3.5). The analysis of the time component of the solutions will be based on formulas (3.2) and (3.5).
If there are no spatial constraints then there are k(k − 1) conditions for the 2k time components of A, since every p_ij and Q_ij in (3.2) and (3.5) has to be constant. Each spatial constraint of the form (3.3), however, decreases the number of time constraints by 2. On the other hand, we need to be able to choose at least two of the spatial variables arbitrarily because of Lemma 2.2, so we expect the number of spatial constraints to be at most k − 2. In this case there would be k^2 − 3k + 4 equations for the 2k functions. This means that for k ≤ 4 we can expect to find solutions, but for k > 4 we will obtain an overdetermined system. We will give a complete analysis of the cases k = 2, k = 3, and k = 4 in this paper. It appears that for k > 4 there really are no solutions, but we could not find a sufficiently neat way to prove this.
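The count in the paragraph above can be tabulated directly:

```python
# With the maximal number k - 2 of spatial constraints, the k*(k-1)
# time conditions are reduced by 2 per constraint, leaving
# k^2 - 3k + 4 equations for the 2k time components of A(t).
for k in range(2, 7):
    equations = k*k - 3*k + 4
    unknowns = 2*k
    status = "overdetermined" if equations > unknowns else "ok"
    print(k, equations, unknowns, status)
```

The balance tips exactly after k = 4, where the system has 8 equations for 8 functions.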
In the analysis we often have situations where a certain case reduces to a case of smaller k. For future reference we record these simple observations.
1. If some A j is a constant linear combination of other columns then the problem reduces to a similar problem with smaller k.
2. If some v i is constant the problem reduces.
3. If some v i is a constant linear combination of other v j then the problem reduces.

Proof. 1. For example let us suppose that
Then we can define Ã and ṽ such that ϕ = Av = Ãṽ.
2. Let us then suppose that v^k = c = constant and let ṽ = (v^1, . . . , v^{k−1}). Then ϕ = Av = Ãṽ + cA_k, where Ã consists of the first k − 1 columns of A. But the conditions in Theorem 2.1 do not depend on the term cA_k.
Then we can define Ã and ṽ such that ϕ = Av = Ãṽ.
Also with respect to time one has a simple invariance.

Lemma 3.2 Suppose that some ϕ = A(t)v(z) is a solution and let φ̃ = M(θ)ϕ or φ̃ = M̂(θ)ϕ. Then φ̃ is a solution if and only if
Proof. This is a simple computation using the criteria of Theorem 2.1.
Hence, if convenient, we can always rotate or reflect our solution with such a matrix. Note that the rotation adds a constant to the vorticity: if φ̃ = M(θ_0 t)ϕ, then ζ̃ = 2θ_0 + ζ.

Case k = 2
Let us briefly recall what happens when k = 2; see also [11,18] for more details. Then according to Lemma 2.2 we can assume without loss of generality that v(z) = z. In this case the coordinates z are in fact Lagrangian coordinates, and the corresponding vector field in Eulerian coordinates is given by u = A′A^{-1}x. The conditions (3.2) and (3.5) are now

where e and c are constants. The solution can be written explicitly for example in the following way. We have ϕ = Az where

Here θ and r are arbitrary functions of t. Note that this is a QR decomposition of the matrix A. We may take e = 1 without loss of generality, so that A is a curve in SL(2).
Hence one can describe the degree of generality of the solution by saying that one can choose arbitrarily two functions of time. The solution set can also be given in a very different form using complex analysis, as in [11]. Note that there is no real choice for the function v; one can say that it is uniquely determined in the sense of Lemma 2.2. In spite of the relative triviality of this case, the well-known Kirchhoff solution is of this form [16].
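As a concrete instance of this family (our own illustration, not the paper's general matrix, whose entries carry the arbitrary functions θ(t) and r(t)), take A(t) = M(αt) diag(r, 1/r) M(βt) with constants α, β, r; this is a Kirchhoff-type curve in SL(2). One can verify directly that u = A′A^{-1}x is an Euler flow: it is divergence free and its material acceleration is curl free, hence a pressure gradient.

```python
import sympy as sp

t, al, be, r, x1, x2 = sp.symbols('t alpha beta r x1 x2', positive=True)

def M(th):  # rotation matrix
    return sp.Matrix([[sp.cos(th), -sp.sin(th)], [sp.sin(th), sp.cos(th)]])

# Kirchhoff-type curve in SL(2): particles move on ellipses that
# themselves rotate; alpha, beta, r are constants.
A = M(al*t) * sp.diag(r, 1/r) * M(be*t)

x = sp.Matrix([x1, x2])
u = A.diff(t) * A.inv() * x                 # Eulerian field u = A' A^{-1} x

div_u = sp.simplify(sp.diff(u[0], x1) + sp.diff(u[1], x2))

# Euler requires u_t + (u . grad)u to be a gradient, i.e. curl free
acc = u.diff(t) + u.jacobian([x1, x2]) * u
curl_acc = sp.simplify(sp.diff(acc[1], x1) - sp.diff(acc[0], x2))

print(div_u, curl_acc)  # 0 0
```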

Case k = 3
Somewhat surprisingly, and to the best of our knowledge, this case has not been investigated before. Let us first state the main result, which turns out to be a generalization of the above case. To this end we first define the following matrices:

Here r, θ, a_1 and a_2 are functions of t.
and ϕ = Av where A is as above. Then this gives a solution to the Euler equations if a_1 = 2θ′/r^2 and a_2 = −1/r^2.
Proof. Using the criteria of Theorem 2.1 one easily verifies that this is a solution.
As mentioned, (3.6) is a special case of this Theorem, obtained by choosing f(z_2) = c z_2. Note that here, too, we can actually achieve det(dϕ) = 1, so that the coordinates z are in fact real Lagrangian coordinates.
While it is easy to check that we indeed obtain a solution, it is not so easy to prove that this is essentially the most general solution of this form. Note that we have here two arbitrary functions of time, namely r and θ, as in the case k = 2; in addition we have one arbitrary function of one variable in the z coordinates, namely f. However, there is no canonical form of the solution. For example, one could take ṽ = (z_1, f(z_1), z_2). Then, modifying the matrix A a little, we can still get a solution, but the degree of generality remains the same.
Note that we can find a solution with prescribed vorticity in the sense that given any ζ that depends only on z 2 we can find the corresponding f by simple integration.
Let us show how to find the complete solution set. For ϕ to be a solution, the constraint equations

det(dϕ) = p_12 g_12 + p_13 g_13 + p_23 g_23 , h = Q_12 g_12 + Q_13 g_13 + Q_23 g_23 (3.7)

have to be independent of time.

Proof. Without loss of generality we may assume that p_12 ≠ 0. Then we have

But if there are no constraints for the spatial variables then each p_ij must be constant and the problem reduces by Lemma 3.1.
If we have one constraint this can be put in a simpler form.
Lemma 3.4 If there is one constraint for the spatial variables, then without loss of generality we can assume that g_23 = 0, and we can choose p_12 = 1 and p_13 = 0 in (3.7).
Hence we expect that there can be only one constraint in the spatial domain.
Lemma 3.5 If there are two constraints for the spatial variables then either det(dϕ) = 0 or the problem reduces.
Proof. Lemma 3.1 implies that if ∇v_j = 0 for some j the problem reduces, so we may suppose that ∇v_j ≠ 0. We have seen that we can assume that one constraint is g_23 = 0 and thus v_3 = f(v_2) for some f. But then the other constraint is of the form

Proof. We have seen that without loss of generality we may suppose that the constraint is g_23 = 0. The solution to this equation is v_3 = f̃(v_2) for an arbitrary function f̃. By Lemma 2.2, we may thus assume that v = (z_1, z_2, f̃(z_2)). Substituting g_23 = 0 into (3.7) implies also that h = Q_12 g_12 + Q_13 g_13 is independent of time, and so there are constants c_j such that

we see that we can also choose c_1 = 0 and c_2 = 1.
Let us illustrate how a solution of this type might look. Note that there cannot be any periodic solutions apart from those that can be obtained by considering the case k = 2. Since there is a lot of freedom in choosing the various functions, many different kinds of cases are possible. In particular the motion of a single particle can be quite complicated, depending on the choice of the arbitrary functions. But there appears a kind of wavefront defined by A_1: at each t, the points whose z_2 coordinates are equal all lie on the same line parallel to A_1. Figure 3 shows an example of such a solution.

Case k = 4

Theorem 4.1 If there are fewer than two constraints or more than two constraints, then either rank(dv) < 2 or the problem reduces to the case k < 4.
Theorem 4.2 If there are two constraints, then without loss of generality we may assume that they are

g_24 = g_13 = 0 , or
g_24 = g_14 = 0 , or
g_24 + g_13 = 0 , g_14 = 0 , or
g_24 + g_13 = 0 , g_14 − g_23 = 0 .

The proof will be based on several Lemmas.
Lemma 4.1 Without loss of generality we may assume that one constraint is of the form g_24 + c g_13 = 0.
Proof. Let ṽ be a vector and let g̃_ij be the corresponding minors of dṽ. Then one constraint can be written as Σ α_ij g̃_ij = 0.
Without loss of generality we may assume that α_24 = 1. Then let us introduce the following matrix

Proof. If there is only one spatial constraint, by the above Lemma we may assume it to be g_24 + c g_13 = 0. Then the determinant conditions for A imply that the five expressions

Proof. By Lemma 4.1 we already know that one constraint can be written as g_24 + c_0 g_13 = 0. Hence if the second constraint is g_13 = 0, we have our first case.

Now let us set
Then we set ṽ = Hv. With this substitution the first constraint is the same as before and the second is of the form g_14 + c_1 g_13 + c_2 g_23 = 0 for some constants c_j.
Note that the first case of

Proof. Let us show that if c_0 = 0 the problem reduces to the previously known cases.
The case c_0 = c_1 = 0 and c_2 ≠ 0. Here we can swap v_1 and v_2 to obtain g_14 = g_24 + c_2 g_13 = 0, and the system is in the desired form. Let ṽ be our vector and let us denote the corresponding minors by g̃_ij. We have to show that in the remaining cases we obtain the fourth case or another known case. By Lemma 4.4 we may assume that c_0 = 1, and hence we have to reduce constraints of the form

g̃_24 + g̃_13 = 0 , g̃_14 + c_1 g̃_13 + c_2 g̃_23 = 0

to a simpler form. If the polynomial p = x^2 − c_1 x − c_2 has distinct real roots, then choosing the β_j to be these roots we obtain g_24 = g_13 = 0, the first case in Theorem 4.2. If there is a double root, then choosing β_1 = c_1/2 we get g_14 = 0, which leads to the third case. If the roots are complex, we choose β_2 = (2c_2 + c_1 β_1)/(2β_1 − c_1), which leads to

(4c_2 + c_1^2) g_14 + (c_1 − 2β_1)^2 g_23 = 0 .

Since 4c_2 + c_1^2 < 0, we can further reduce this to g_14 − g_23 = 0 by scaling. Now in Theorem 4.2 we have four PDE systems for the vector v, so the next task is to find the general solutions to these systems. However, one of the cases can be discarded.

1. If g_13 = g_24 = 0 then we can take v = (z_1, z_2, f_1(z_1), f_2(z_2)).
Proof. In each of the three cases we must have g_12 ≠ 0. Indeed, otherwise the equalities of Lemma 2.1, combined with the conditions of any of the three cases, imply that all the minors are zero and thus det(dϕ) is zero. Therefore by Lemma 2.2 we may choose a labelling with v_1 = z_1 and v_2 = z_2. It is difficult to show directly that these three cases are actually different, i.e. that they cannot be reduced to each other. We will show this later in Lemma 5.6.
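These normal forms can be checked mechanically. With g_ij denoting the minors of dv as in Section 2 (indices shifted to 0-based in the code), the case 1 ansatz annihilates g_13 and g_24, and any anti CR pair satisfies the case 3 constraints — illustrated here with symbolic f_j and with the polynomial anti CR map used again later:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
f1 = sp.Function('f1')(z1)
f2 = sp.Function('f2')(z2)

def minors(v):
    dv = sp.Matrix(v).jacobian([z1, z2])
    return {(i, j): dv[[i, j], :].det() for i in range(4) for j in range(i + 1, 4)}

# Case 1: v = (z1, z2, f1(z1), f2(z2)) satisfies g13 = g24 = 0
g = minors([z1, z2, f1, f2])
print(g[(0, 2)], g[(1, 3)])  # 0 0

# Case 3: v = (z1, z2, v3, v4) with (v3, v4) anti CR, for example
# (z1**2 - z2**2, -2*z1*z2): then g14 - g23 = 0 and g24 + g13 = 0
h = minors([z1, z2, z1**2 - z2**2, -2*z1*z2])
print(sp.simplify(h[(0, 3)] - h[(1, 2)]), sp.simplify(h[(1, 3)] + h[(0, 2)]))  # 0 0
```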
Next we should prove Theorem 4.1. We already know that two constraints can be reduced to the cases in

Since we know that v = (z_1, z_2, f_1(z_1), f_2(z_2)), simply substituting this into the third equation gives

It is straightforward to check that the solutions are affine, and hence the problem reduces.
We know that v = (z_1, z_2, z_2 f_1(z_1) + f_2(z_1), f_1(z_1)). Hence the third equation is

Again it is easy to check that the solutions are affine and the problem reduces.
is an anti CR map. The first two constraints are thus the anti CR system, and the third constraint can be written as c_1 g_12 + c_2 g_13 + c_3 g_23 + c_4 g_34 = 0.
Using the anti CR system to eliminate v_4 we thus obtain a system ∆v_3 = 0, … Using rifsimp one easily verifies that the solutions are necessarily affine, and thus the problem reduces.

Comparison of cases 1 and 3
Let us point out a relationship between cases 1 and 3, which is in a way hidden in the given formulation. In case 3 we have v = (z_1, z_2, v_3, v_4) where (v_3, v_4) is an anti CR map, and in case 1 v = (z_1, z_2, f_1(z_1), f_2(z_2)) where the f_j are arbitrary. But now recall that the general solution of the one dimensional wave equation u_{11} = 0 can be written as

So in a way case 3 is an elliptic case and case 1 is a hyperbolic case. In fact we could have used a different basic form in Theorem 4.3 to make the connection more explicit: as in case 3 we have an anti CR system, in case 1 we could have used the corresponding coupled wave system. However, the form given in Theorem 4.3 is more convenient for representing the solutions to the Euler equations. Taking this point of view we thus obtain a new family of solutions, case 1, from the old one, case 3, by changing one sign in the anti CR system. We will see that this elliptic/hyperbolic character also shows up when we compute the corresponding vorticities below.

k = 4, the time dependence
Now we begin the analysis of the time component A in the three relevant cases shown in Theorem 4.3.

Case 3
In this case the spatial constraints are

g_14 − g_23 = g_24 + g_13 = 0 (5.1)

and we have seen that we may take v = (z_1, z_2, f_1, f_2) for some anti CR map f = (f_1, f_2). This case has already been studied previously [5,6,11,18]. A famous example of this case is the Gerstner map [14]. In this case we compute det(dϕ_G) = 1 − e^{2kz_2} and ζ_G = 2μe^{2kz_2} / (1 − e^{2kz_2}). In general we have the following result.
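The Gerstner computation can be checked symbolically. We use one standard parametrization of the trochoidal wave (an assumption on sign conventions, chosen so that it reproduces the determinant and vorticity stated above):

```python
import sympy as sp

t, z1, z2, k, mu = sp.symbols('t z1 z2 k mu', positive=True)

# Gerstner map in Lagrangian labels (z2 < 0 in the fluid region)
E = sp.exp(k*z2)
ph = k*z1 - mu*t
phi = sp.Matrix([z1 + E/k*sp.sin(ph), z2 - E/k*sp.cos(ph)])

J = phi.jacobian([z1, z2])
detJ = sp.simplify(J.det())
print(sp.simplify(detJ - (1 - sp.exp(2*k*z2))))  # 0, i.e. det = 1 - e^{2 k z2}

# vorticity in z coordinates: zeta = (du2/dx1 - du1/dx2) o phi,
# computed via the chain rule grad_x u = (dU/dz) J^{-1}
U = phi.diff(t)
G = U.jacobian([z1, z2]) * J.inv()
zeta = sp.simplify(G[1, 0] - G[0, 1])
print(sp.simplify(zeta*(1 - sp.exp(2*k*z2)) - 2*mu*sp.exp(2*k*z2)))  # 0
```

The second print verifies ζ (1 − e^{2kz_2}) = 2μ e^{2kz_2}, i.e. exactly the ζ_G above.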
Theorem 5.1 Let f be any anti CR map such that 1 − |∇f_1|^2 ≠ 0 in D. If

then ϕ gives a solution to the Euler equations, and in this case

Proof. This is again a simple computation using the criteria of Theorem 2.1.
Note that now we have practically no choices in the time domain, but in some sense more choices in the spatial domain than in the previous cases.
In Figure 5.1 we have an example of this case with f = (z_1^2 − z_2^2 + 1/20, −2z_1z_2), μ = 1, θ_0 = 1/2, where θ_0 is the coefficient of the implicit rotation matrix M(θ_0 t) by which we can premultiply the solution, according to Lemma 3.2. Before proving that this is indeed the most general form of the solution, let us make a few comments on the form of the solution. Previously, solutions of this type have been given in different forms, so let us indicate the relationship between the various formulations. Let w = (w_1, w_2) and ŵ = (ŵ_1, ŵ_2) and let θ_0 and μ_0 be some constants. Then one could look for solutions of the form

As explained in [18], in the PDE system for w and ŵ, namely the system (5.1), one can for example give w arbitrarily and then solve for the corresponding ŵ. This is a regular elliptic system for ŵ. Note that here (anti) CR maps play no role a priori. However, it has been known that if w is a CR map and ŵ an anti CR map, then this provides a solution to the equations (5.1) [4,6]. In fact it seems that all the solutions that were known before [18] assumed the harmonicity of w and ŵ.
In any case, we have the following simple observation. Hence even if w and ŵ are not (anti) CR maps, they are connected by an anti CR map.
Proof. Without loss of generality we may suppose that det(dw) ≠ 0. Hence there is some map f such that ŵ = f ∘ w. Then substituting this into the system shows that f must be an anti CR map. Now, using w as new coordinates, we obtain solutions which are as given in Theorem 5.1. Note that the form (5.3) can be very useful because it may be possible or more convenient to compute w and ŵ directly, in which case typically f is not explicitly known.
Let us then turn to the proof that the most general solution is given by Theorem 5.1, taking into account Lemmas 2.2 and 3.2 as always. Using (5.1), the conditions of Theorem 2.1 give that

det(dϕ) = p_12 g_12 + p_34 g_34 + (p_13 − p_24) g_13 + (p_14 + p_23) g_14 ,
h = Q_12 g_12 + Q_34 g_34 + (Q_13 − Q_24) g_13 + (Q_14 + Q_23) g_14

are constant with respect to time. Hence there are constants e_j and c_j such that

First we can reduce the problem to a simpler form. Here e_3 and e_4 cannot both be zero, because otherwise det(dϕ) = 0. Thus, after this transformation we have ẽ_1 = p̃_12 = 0 or ẽ_2 = p̃_34 = 0, and by symmetry we may assume the former.

Proof. By the previous Lemma we may suppose that

where B ∈ SL(2). If B = M(μ), we obtain immediately that μ and β are constants, and we get the required form using Lemma 3.2.
If B is not a rotation, it can be written as

where s ≠ 0, μ, and θ are some functions. The conditions in the second row of (5.4) give the following equations:

Evidently θ − μ − β must be constant. It follows that β and s are constants, and further that μ′ and θ′ are constants. Hence we can write

where μ_0, μ_1, θ_0, θ_1, and β_0 are constants. Let us set β_1 = θ_1 − μ_1. Using Lemma 3.2 we can premultiply by the matrix M(−μ_1 t − μ_0), so that without loss of generality we may assume that

Now it is straightforward to check that (w, ŵ) satisfies the system (5.1), and hence by Lemma 5.1 we may take w as new coordinates, which then gives the required form.
Thus we have constants e_j, c_j such that

Proof. Due to symmetry, we may assume that p_12 ≠ 0 and further that p_12 = e_1 = 1. Let ℓ be some function and let

Proof. By the previous Lemma we may assume that det(A_1, A_2) = 1, A_3 = ℓA_2 and A_4 = A_1/ℓ. The conditions in the second row of (5.5) give the following conditions for A:

Hence (ℓ′/ℓ)^2 = constant, and thus we can take ℓ(t) = e^{2ct}. Then we see that A_1, A_2 is constant and hence we may write

where the Â_j are constant vectors. Now we check that in fact θ is a linear function, so by Lemma 3.2 we can drop M. Then by a constant rotation we can assume that Â_1 = (r, 0), so that at present we can write

Now a single particle has a fairly straightforward trajectory: when |t| is large, the particle approaches the origin approximately along the line parallel to the vector (f_2(z_2), z_2) when t is negative, and moves away from the origin approximately along the line parallel to the vector (z_1, f_1(z_1)) when t is positive. Thus its trajectory resembles a hyperbola. When t is negative the particles are grouped according to the z_2 coordinate, and when t is positive they are grouped according to the z_1 coordinate. If we rotate the solution with M(θ_0 t), the particles also rotate around the origin as they move towards or away from it. In Figure 5.2 we have chosen c = 1, θ_0 = 1/2, f_1 = 3 cos(3z_1)/(2 + 2z_1^2) and f_2 = −sin(3z_2/2)/4 + sin(4z_2)/2 as an example.

Case 2
Now we have the following equations for the time component:

Proof. If e_4 = 0, Lemma 2.1 implies that p_13 and p_24 are constants. Thus the problem reduces.
Theorem 5.4 If v satisfies g_13 + g_24 = g_14 = 0, then the solution ϕ = Av is given by

Proof. By the previous Lemma we may assume that det(A_2, A_3) = 1, A_1 = A_2 and A_4 = A_3. Hence the conditions for A can be written as

Evidently |A_2| and |A_3| are constants. Then it is easy to compute that the solution is of the form

where we may assume b_1 = 0. Now the transformation Ã = AH, v = Hṽ, where

preserves the spatial constraints and gives the desired form for the time component.
Except for the possible rotation at constant speed, the trajectory of each particle is a line segment parallel to the vector (z_1, f_1(z_1)).

Proof. Let us prove that cases 1 and 2 are inequivalent; the proof for the other pairs is similar. If cases 1 and 2 were equivalent, then there would be a solution ϕ = Av = Ãṽ, where v is an instance of case 1 and ṽ = Hv an instance of case 2. But then we would also have Ã = AH^{-1}, where A is a solution for case 1 given by Theorem 5.3 and Ã a solution for case 2 given by Theorem 5.4, and there is clearly no matrix H that can satisfy this.

Vorticity
Let us finally say a few words about vorticity. Above we have computed some families of solutions and the corresponding vorticities. However, one could also ask if one can find a solution with a prescribed vorticity. Let us examine each of the relevant cases.

(ζ^2 + 4c^2) ζ_11 − 2 ζ ζ_10 ζ_01 = 0 .
Proof. Note that the equation (6.1) is not "overdetermined" in the usual sense. However, the right hand side is of the separation of variables type, so the left hand side cannot be completely arbitrary. Giving this equation to rifsimp and specifying an elimination order that eliminates the functions f_j produces the given PDE.
Note that we can actually find one family of solutions to the vorticity equation:

Here the d_j are constants. Of course this is not the general solution. Note also that the equation for the vorticity is a kind of nonlinear wave equation.
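For instance, one particular solution of the PDE above (our own illustration, not necessarily the family referred to in the text) is ζ = 2c tan(d_1 z_1 + d_2 z_2 + d_3), which can be verified symbolically:

```python
import sympy as sp

z1, z2, c, d1, d2, d3 = sp.symbols('z1 z2 c d1 d2 d3', positive=True)

# candidate vorticity of separation of variables type
zeta = 2*c*sp.tan(d1*z1 + d2*z2 + d3)

# (zeta^2 + 4c^2) zeta_11 - 2 zeta zeta_10 zeta_01, with jet notation
# zeta_11 = d^2 zeta / dz1 dz2, zeta_10 = d zeta / dz1, zeta_01 = d zeta / dz2
pde = ((zeta**2 + 4*c**2)*sp.diff(zeta, z1, z2)
       - 2*zeta*sp.diff(zeta, z1)*sp.diff(zeta, z2))
print(sp.simplify(pde))  # 0
```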
Proof. Since f is an anti CR map we also have ∆f_1 = 0, so again ζ cannot be arbitrary. Using rifsimp to eliminate f_1 we obtain the above PDE for ζ.
Again one can find a specific family of solutions, where the functions g_j are arbitrary. In this case the vorticity equation is a nonlinear elliptic equation.