Chaos in stochastic 2d Galerkin-Navier-Stokes

We prove that all Galerkin truncations of the 2d stochastic Navier-Stokes equations in vorticity form on any rectangular torus subjected to hypoelliptic, additive stochastic forcing are chaotic at sufficiently small viscosity, provided the frequency truncation satisfies $N\geq 392$. By "chaotic" we mean having a strictly positive Lyapunov exponent, i.e. almost-sure asymptotic exponential growth of the derivative with respect to generic initial conditions. A sufficient condition for such results was derived in previous joint work with Alex Blumenthal, which reduces the question to the non-degeneracy of a matrix Lie algebra implying H\"ormander's condition for the Markov process lifted to the sphere bundle (projective hypoellipticity). The purpose of this work is to reformulate this condition to be more amenable to Galerkin truncations of PDEs and then to verify the condition using (a) a reduction to genericity properties of a diagonal sub-algebra, inspired by the root space decomposition of semi-simple Lie algebras, and (b) computational algebraic geometry executed by Maple in exact rational arithmetic. Note that even though we use a computer-assisted proof, the result is valid for all aspect ratios and all sufficiently high-dimensional truncations; in fact, certain steps simplify in the formal infinite-dimensional limit.


Introduction
Chaos is fundamental to our understanding of fluids and fluid-like systems in realistic settings and is thought to be an integral aspect of turbulence in these systems [13]. However, there are few mathematically rigorous results on chaos in fluid models, even for finite-dimensional models. In this paper we consider Galerkin truncations of the 2d Navier-Stokes equations on a torus of aspect ratio r > 0, subjected to additive stochastic forcing. This system can be written as a member of the following class of stochastic differential equations (SDEs) on R^d,
$$ dx_t = \big(B(x_t, x_t) - \varepsilon A x_t\big)\,dt + \sqrt{\varepsilon} \sum_{k=1}^{r} e_k\, dW^k_t, \qquad (1.1) $$
where {e_k}_{k=1}^r are a family of constant vectors and {W^k}_{k=1}^r are independent standard Wiener processes with respect to a canonical stochastic basis (Ω, F, (F_t), P). Here A is symmetric positive definite, and B is bilinear, satisfying x · B(x, x) = 0 and div B = 0, as well as the "cancellation property" B(e_k, e_k) = 0 (a more general cancellation property can be taken, see [10]). Besides Galerkin truncations of the Navier-Stokes equations (see Section 1.1 below), this class includes Lorenz-96 [39] and the shell models GOY [26] and SABRA [40], all of which are observed to be chaotic for small ε (see e.g. discussions in [10, 33, 41] for Lorenz-96, e.g. [17] for GOY and SABRA, and e.g. [13] for Galerkin-Navier-Stokes). The balance of dissipation and forcing (usually called 'fluctuation-dissipation') is chosen so that there is a non-trivial limit as ε → 0; this balance can always be arranged in small-damping/large-forcing regimes by a suitable rescaling of t and x (see Remark 1.5).
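As a concrete illustration of this class (our sketch, not code from the paper): the stochastically forced Lorenz-96 model with A = Id and forcing on every mode, integrated by Euler-Maruyama, with the top Lyapunov exponent estimated from the growth of a tangent vector. The step size, mode count, and forcing pattern here are illustrative choices only.

```python
import numpy as np

def b_l96(x):
    """Lorenz-96 bilinear term B_i(x, x) = (x_{i+1} - x_{i-2}) x_{i-1}; it
    conserves energy, x . B(x, x) = 0, as required of the class (1.1)."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1)

def db_l96(x, v):
    """Derivative of B at x in direction v (B is quadratic, so this is exact)."""
    return ((np.roll(v, -1) - np.roll(v, 2)) * np.roll(x, 1)
            + (np.roll(x, -1) - np.roll(x, 2)) * np.roll(v, 1))

def top_lyapunov(eps, d=10, dt=1e-3, steps=50_000, seed=0):
    """Crude estimate of lambda_1 for dx = (B(x,x) - eps*x) dt + sqrt(eps) dW."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    v = rng.standard_normal(d)
    log_growth = 0.0
    for _ in range(steps):
        x = x + dt * (b_l96(x) - eps * x) + np.sqrt(eps * dt) * rng.standard_normal(d)
        v = v + dt * (db_l96(x, v) - eps * v)   # first-variation (tangent) equation
        n = np.linalg.norm(v)
        log_growth += np.log(n)
        v /= n                                  # renormalize to avoid overflow
    return log_growth / (steps * dt)
```

This is a numerical probe only; the point of the results discussed below is that positivity of λ_1 can be proven rather than merely observed.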
For many physical models, under fairly general conditions on {e_k}_{k=1}^r it is possible to show that there is a unique stationary measure µ associated to the Markov process (x_t) solving (1.1) (see Section 2 below). We denote the stochastic flow of diffeomorphisms defined by the solution map as x → x_t =: Φ^t_ω(x), t ≥ 0. For SDEs of the form (1.1), the stationary measures have Gaussian upper bounds (see Section 2 or [11]), and so it is possible to define a top Lyapunov exponent via the limit
$$ \lambda_1 := \lim_{t \to \infty} \frac{1}{t} \log |D_x \Phi^t_\omega|, $$
which holds for µ × P almost every (x, ω) ∈ R^d × Ω. In particular, the Lyapunov exponent λ_1 is deterministic and well-defined independent of initial condition or noise path. When λ_1 > 0, we say that (1.1) is chaotic, as this exhibits exponential sensitivity of the trajectory to changes in the initial condition. For deterministic systems, verifying λ_1 > 0 is notoriously difficult; see e.g. the discussions in [10] and [46, 53, 54]. Even in the random case, there are relatively few methods. The methods à la Furstenberg (see e.g. [7, 15, 38, 49, 52]) are powerful when applicable, but are not quantitative and cannot be used to obtain λ_1 > 0 for dissipative systems. For systems with a lot of rigid structure, it is sometimes possible to obtain even asymptotic expansions of λ_1 in small-noise limits; see e.g. [3, 6, 8, 9, 31, 42, 44, 47]. However, these methods generally require almost complete knowledge of the limiting ε → 0 dynamics, and it is far from clear how these arguments could be adapted to more complicated systems such as the Galerkin-Navier-Stokes equations or even Lorenz-96.
Our recent work with Alex Blumenthal [10] puts forward a new method for obtaining lower bounds on λ_1 for SDEs. Therein, we used the method to prove that the Lorenz-96 model subject to stochastic forcing is chaotic for all sufficiently small ε; the Lorenz-96 model is commonly used in applied mathematics as a test case for numerical or analytical methods for high-dimensional, chaotic systems [12, 33, 41, 43, 45], but no mathematical proof of chaos had previously been found, even in the stochastic case. More generally, for each SDE in the class (1.1), we formulated a sufficient condition for chaos in terms of a certain Lie algebra associated to the nonlinearity. In particular, the Lie algebraic condition of [10] implies the quantitative estimate lim_{ε→0} λ_1(ε)/ε = ∞, and hence ∃ε_0 > 0 such that ∀ε ∈ (0, ε_0) there holds λ_1 > 0.
In this paper, we first provide a convenient reformulation of the Lie algebra condition of [10], particularly amenable to application in Galerkin approximations of PDEs and other complex-valued SDEs, using basic concepts from complex geometry. Our main result is to verify this condition for the Galerkin truncations of the 2d Navier-Stokes equations with frequency cutoff N ≥ 392 on tori of any aspect ratio (Theorem 1.1), thus proving chaos for all sufficiently small ε. Inspired by the classical root space decomposition of semi-simple Lie algebras, we reduce the problem to proving genericity of a diagonal sub-algebra. Using the algebraic structure of the nonlinearity in Fourier space, we further reduce this question to showing that a certain list of polynomial systems has only trivial solutions. These systems are exhaustively verified to be inconsistent using methods from computational algebraic geometry carried out with Maple [1]. Note that despite using a computer-assisted proof, our results nevertheless apply to arbitrary frequency truncations and arbitrary aspect ratios and are, in a certain sense, well-suited for infinite dimensions. We believe the method put forward in this paper should be applicable to other Galerkin approximations of PDEs, both real- and complex-valued, provided the nonlinearity is a finite-degree polynomial.

2d Galerkin-Navier-Stokes equations
Denote the torus of arbitrary side-length ratio T²_r = [0, 2π) × [0, 2π/r) (periodized) for r > 0. Recall that the Navier-Stokes equations on T²_r in vorticity form are given by
$$ dw + u \cdot \nabla w \, dt = \varepsilon \Delta w \, dt + \sqrt{\varepsilon}\, dW_t, $$
where u is the divergence-free velocity field satisfying the Biot-Savart law u = ∇^⊥(−∆)^{−1}w and Ẇ_t is a white-in-time, colored-in-space Gaussian forcing assumed to be diagonalizable with respect to the Fourier basis. The parameter ε represents the kinematic viscosity; the noise has been scaled with a matching √ε so that the dynamics have a non-trivial limit when ε → 0. For definiteness, we will assume the forcing is diagonal in the Fourier basis with amplitudes α_k, β_k, where {W^{(k,a)}, W^{(k,b)}}_{k∈K} are independent standard Wiener processes with respect to a canonical stochastic basis (Ω, F, (F_t), P), and that if α_k = 0 then β_k = 0 and vice-versa. Here we are denoting the "upper" lattice Z²_+ := {k ∈ Z²_0 : k_2 > 0} ∪ {k ∈ Z²_0 : k_2 = 0, k_1 > 0}, and we denote the set of driving modes by K ⊆ Z²_+. Upon taking the Fourier transform, one can rewrite the equations in terms of the complex coefficients w_k = r/(2π)² ∫_{T²_r} e^{−i(x_1k_1 + rx_2k_2)} w(x) dx for each k ∈ Z²_0, which satisfy the reality constraint w_{−k} = w̄_k. In Fourier space, the nonlinearity B(w, w) = −u · ∇w takes, for each ℓ, the form (1.2): B_ℓ(w, w) = Σ_{j+k=ℓ} c_{j,k} w_j w_k, where the sum is over all j, k ∈ Z²_0 such that j + k = ℓ and c_{j,k} is the symmetrized coefficient. In what follows c_{j,k} always depends on r, but we suppress this dependence for notational simplicity.
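For orientation, the classical computation for the 2d vorticity nonlinearity gives the following shape for the symmetrized coefficient in the square-torus case r = 1 (a sketch of the standard formula; the paper's r-dependent weights and normalization may differ):

```latex
c_{j,k} \;=\; \tfrac{1}{2}\,\langle j^{\perp}, k \rangle
  \left( \frac{1}{|k|^{2}} - \frac{1}{|j|^{2}} \right),
  \qquad j^{\perp} := (-j_2,\, j_1).
```

In particular, c_{j,k} vanishes exactly when j and k are parallel or when |j| = |k|; this degeneracy drives much of the combinatorics in the bracket computations later in the paper.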
One way to deal with the reality constraint w_{−k} = w̄_k is to restrict the complex-valued w_k to the upper lattice Z²_+ and encode the values on the negative lattice Z²_− := −Z²_+ through complex conjugation, w_{−k} = w̄_k. In this sense we can think of the vorticity w = (w_ℓ) as belonging to the complex space C^{Z²_+}, and the Navier-Stokes equations are seen to be a complex-valued evolution equation (1.3) on C^{Z²_+}. The above formulation gives a clear method for finite-dimensional approximation, known as a Galerkin approximation. Define the truncated lattice Z²_{+,N} and simply restrict the vorticity to it, w = (w_ℓ) ∈ C^{Z²_{+,N}}, in which case (1.3) becomes an SDE, with the sum in the nonlinearity (1.2) now taken over all j, k ∈ Z²_{+,N} such that j + k = ℓ ∈ Z²_{+,N}. We regard the phase space as a real finite-dimensional manifold, with the real and imaginary coordinates providing the tangent space T_w C^{Z²_{+,N}}. One can easily check that this truncation satisfies all of the hypotheses assumed for (1.1).

Main results
We will assume a general condition on K that implies there exists a unique stationary measure for all ε > 0 (c.f. [19, 28, 48]). Denote the full truncated lattice by Z²_{0,N} := Z²_{+,N} ∪ (−Z²_{+,N}). In [19] it was shown explicitly that for r = 1 the sets K = {(0, 1), (1, 1)} and K = {(1, 0), (1, 1)} are both hypoelliptic for all N (note also that if K is hypoelliptic, then so is any K′ such that K ⊆ K′). In the limit N → ∞ the set of hypoelliptic forcings is easier to characterize [28]; however, for a fixed N we are unaware of a simple characterization of all hypoelliptic K, due to the presence of the truncation.
The main theorem of this work is the following.
Before we make remarks, let us provide an outline of the remainder of the paper. In Section 2 we recall the definition of projective hypoellipticity, which corresponds to Hörmander's condition for the Markov process (x_t) lifted to the sphere bundle in a suitable manner, and we recall our results with Alex Blumenthal [10] which (A) provide a useful sufficient condition for projective hypoellipticity in terms of a matrix Lie algebra based only on the nonlinearity (Proposition 2.6) and (B) show that projective hypoellipticity implies Theorem 1.1 (see Section 2.4). In Section 3, we reformulate the sufficient condition for projective hypoellipticity to be more suitable for (1.3) (Proposition 3.11). The remainder of the paper is dedicated to proving this sufficient condition. Section 4 introduces a diagonal sub-algebra h and shows that a certain genericity property of h implies projective hypoellipticity (Corollary 4.9). Section 5 proves this genericity property (Proposition 5.1); it also contains a more detailed summary of the proof of Theorem 1.1 which puts together all of the pieces. Sections 4 and 5 both use computational algebraic geometry and computer-assisted proofs performed with Maple [1] to compute Gröbner bases for certain polynomial ideals (although the arguments used in Section 5 are significantly more complicated). A review of the required algebraic geometry is included for the reader's convenience in Appendix A, and the computer code is included in Appendix C. Appendix B contains a simple but crucial technical lemma regarding polynomial ideals.

Remarks
Remark 1.2. We did not attempt to optimize the proof to reduce the value of N, and we believe that the result holds for much smaller N as well. However, N ≥ 392 is already enough to treat nearly all modern numerical simulations of the 2d Navier-Stokes equations on T²_r.
Remark 1.3. As might be expected, we do not currently have any quantitative estimates on ε_0 or λ_1 in terms of N.
Remark 1.4. The quantitative estimate of λ_1 in terms of ε is almost certainly sub-optimal.

Remark 1.5. If one starts with the scaling (1.5), one can relate the stochastic flow of diffeomorphisms Φ̂^t_ω solving the SDE (1.5) to the stochastic flow Φ^t_ω solving (1.1) through the Brownian self-similar rescaling of the noise path ω̂_t = ε̂^{−1/4} ω_{√ε̂ t} (equality of the two flows is interpreted as equality in probabilistic law). Thus, the Lyapunov exponent λ̂_1 of the stochastic flow Φ̂^t_ω satisfies ε̂^{−1} λ̂_1 = ε^{−1} λ_1, and in particular λ̂_1 > 0 if and only if λ_1 > 0.
Remark 1.6. We believe our methods should extend in an analogous way to Galerkin truncations of other PDEs with polynomial nonlinearities, for example the 3D Navier-Stokes equations, as well as more general truncations like the Fourier decimation models in e.g. [23].
Remark 1.7. Another important truncation of the Euler non-linearity is the Zeitlin model (see [25, 55, 56]), which has the added benefit that it preserves the Poisson structure of the Euler equations for the co-adjoint orbits on SDiff(T²) (see [4]). In this approximation, instead of a sharp truncation in frequency, the Fourier modes are taken to belong to the periodic lattice Z²_{0,mod N} := Z²_0 \ NZ²_0 for some N ≥ 1, and the non-linearity is given by a sine-bracket analogue of (1.2). While we expect that our results should still hold for this model, it is important to note that our methods do not currently apply to this truncation, since the multiplier sin(2π⟨j^⊥, k⟩/N) is not a polynomial in the lattice variables j, k and therefore cannot easily be treated by our algebraic geometry methods.
Remark 1.8. The use of computational algebraic geometry methods to deduce generating properties of matrix Lie algebras is not an entirely new idea. For instance, computations using polynomial ideals and Gröbner bases feature in [20] as a practical tool for deducing transitivity on R^n for certain matrix algebras related to bilinear control systems. However, in contrast with our work, the techniques used in [20] depend very strongly on the dimension and do not generalize to infinite- or arbitrary-dimensional systems like ours.
Remark 1.9. Our computer-assisted proof uses Maple's implementation of the F4 algorithm [21] to compute the reduced Gröbner basis (see [16] or Appendix A) of certain polynomial ideals associated with the coefficients c_{j,k}. Computing Gröbner bases, particularly for ideals generated by high-degree polynomials in many variables, is notoriously costly and can be very sensitive to the choice of variable ordering and associated monomial ordering. It is important to remark that the setup of several of the computations included in Appendix C is incredibly delicate: the computations often fail to converge if some of the constraints are not included, if the variable ordering is not chosen correctly, or if the saturating polynomial is not written in a certain way. Indeed, the principal part of the calculation takes place on polynomials of degree 19 in 11 independent variables, far too high-dimensional a setting for arbitrary computations, even on modern supercomputers.
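As a toy illustration of this pipeline (our example, not the paper's Maple code, with SymPy standing in for Maple's F4, also in exact rational arithmetic): by the weak Nullstellensatz, a polynomial system has no solutions over an algebraically closed field precisely when its reduced Gröbner basis collapses to {1}.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Inconsistent system: the circles x^2 + y^2 = 1 and x^2 + y^2 = 4 share no
# point (even over C), so 1 lies in the ideal and the reduced basis is {1}.
inconsistent = groebner([x**2 + y**2 - 1, x**2 + y**2 - 4], x, y, order='lex')

# Consistent system: the circle and the line x = y intersect, so the reduced
# Groebner basis is non-trivial.
consistent = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
```

The proofs in Appendix C follow the same logic at vastly larger scale, certifying that the relevant polynomial systems are inconsistent.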
Remark 1.10. A remarkable feature of our proof is that it holds for all N large enough and for all torus aspect ratios r > 0. Such a conclusion is simply not possible using more direct methods. Specifically, for a given fixed N and fixed r > 0, one can of course attempt to check the matrix algebra generating properties exhaustively by a more direct method; however, this very quickly becomes extremely expensive for even fairly modest N ≥ 10, and it can only be done in exact rational arithmetic if r is a rational number.

Acknowledgements
We would like to give a special mention to Alex Blumenthal for many fruitful discussions and whose work in [10] laid the foundation for this one.

Preliminaries: projective hypoellipticity and chaos
In this section we review the concepts of hypoellipticity for Markov semigroups, the projective process and its hypoellipticity, and the main results of [10] connecting projective hypoellipticity to chaos as well as some convenient characterizations of projective hypoellipticity.

Hypoellipticity
In this section we briefly recall the notion of hypoellipticity and its relevance to SDEs.
In what follows, let (M, g) be a smooth, connected, complete Riemannian manifold without boundary and X(M) the space of smooth vector fields over M. Let [X, Y] be the Lie bracket of two vector fields X, Y, defined for each f ∈ C^∞(M) by [X, Y](f) = X(Y(f)) − Y(X(f)), where X(f) denotes the directional derivative of f in the direction X. This bracket turns X(M) into an infinite-dimensional Lie algebra. Denote the adjoint action ad(X) : X(M) → X(M) by ad(X)Y = [X, Y]. The next condition, introduced in [29], is crucial to our study.

Definition 2.1 (Hörmander's condition). For F ⊆ X(M), let Lie_m(F) := span{ad(X_m) · · · ad(X_2)X_1 : X_1, . . . , X_m ∈ F}. We say that a collection of smooth vector fields F ⊆ X(M) satisfies Hörmander's condition on M if for each x ∈ M we have the spanning property Lie_x(F) := {X(x) : X ∈ ⋃_m Lie_m(F)} = T_x M.

It is also useful to define a notion of (locally) uniform spanning properties of a collection of vector fields.

Definition 2.2 (Uniform Hörmander). Let F_ε ⊂ X(M) be a set of vector fields parameterized by ε ∈ (0, 1]. We say F_ε satisfies the uniform Hörmander condition on M if ∃m ∈ N such that for any open, bounded set U ⊆ M there exist constants {K_n}_{n=0}^∞ such that for all ε ∈ (0, 1] and all x ∈ U, there is a finite subset of Lie_m(F_ε) which spans T_x M quantitatively, with spanning constant bounded below and C^n(U) norms bounded above in terms of the K_n, uniformly in ε.

An important role will also be played by a certain Lie algebra ideal which is better suited to hypoellipticity for parabolic equations and Markov semigroups.

Definition 2.3 (Parabolic Hörmander's condition). Let X_0 ∈ X(M) be a distinguished "drift" vector field and let X ⊆ X(M) be a collection of "noise" vector fields. We define the zero-time ideal generated by X_0 and X as the Lie algebra generated by the sets X and [X, X_0] := {[X, X_0] : X ∈ X}, which we denote by Lie(X_0; X) := Lie(X, [X, X_0]).
Correspondingly, we say that the vector fields X_0, X satisfy the parabolic Hörmander condition on M if for each x ∈ M we have Lie_x(X_0; X) = T_x M. Likewise, we say X_0, X satisfies the uniform parabolic Hörmander condition if {X, [X, X_0]} satisfies the uniform Hörmander condition in the sense of Definition 2.2.
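As a concrete illustration of these definitions (the classical Grushin-type example, not one taken from the paper): with noise field X_1 = ∂_x and drift X_0 = x∂_y on R², neither field alone spans the tangent space on the line {x = 0}, but the bracket [X_1, X_0] = ∂_y does, so the zero-time ideal spans everywhere. In coordinates, [X, Y] = (DY)X − (DX)Y:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = sp.Matrix([x, y])

def lie_bracket(X, Y):
    """Coordinate formula [X, Y]_i = sum_j (X_j dY_i/dx_j - Y_j dX_i/dx_j)."""
    return sp.simplify(Y.jacobian(coords) * X - X.jacobian(coords) * Y)

X1 = sp.Matrix([1, 0])   # the "noise" field d/dx
X0 = sp.Matrix([0, x])   # the "drift" field x d/dy, degenerate on {x = 0}
bracket = lie_bracket(X1, X0)
```

Here `bracket` equals the constant field ∂_y, so {X_1, [X_1, X_0]} spans the tangent space at every point, verifying the parabolic Hörmander condition for this toy system.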
Remark 2.4. The terminology 'zero-time ideal' comes from geometric control theory (see e.g. [32]), where Lie(X_0; X) plays an important role in obtaining exact controllability of affine control systems. A proof that the definition of the zero-time ideal in geometric control theory and Lie(X_0; X) coincide can be found in [Proposition 5.10, [10]], although this fact is likely well-known to experts in geometric control theory (see e.g. the discussion in [20, Chapter 3]).
Consider a stochastic process x_t ∈ M, t ≥ 0, defined by the (Stratonovich) SDE
$$ dx_t = X_0(x_t)\,dt + \sum_{k=1}^{r} X_k(x_t) \circ dW^k_t, \qquad (2.2) $$
for vector fields X_k ∈ X(M). Define the Markov kernel for any Borel set O ⊆ M and x ∈ M by P_t(x, O) = P(x_t ∈ O | x_0 = x), and define the Markov semigroup by (P_t ϕ)(x) = E[ϕ(x_t) | x_0 = x], where ϕ : M → R is bounded and measurable. We also define the adjoint semigroup on probability measures P(M) by (P*_t µ)(A) = ∫_M P_t(x, A) µ(dx) for each Borel A ⊆ M and µ ∈ P(M). Under fairly mild conditions on the vector fields {X_k}_{k=0}^r, these Markov semigroups are well defined and solve deterministic PDEs [2, 37]. Recall the definition of a stationary measure for an SDE.

Definition 2.5. A measure µ ∈ P(M) is called stationary for a given SDE if P*_t µ = µ.
Hörmander's theorem implies that if X_0, {X_1, ..., X_r} satisfies the parabolic Hörmander condition, then P_t : L^∞ → C^∞ (see e.g. [27, 29]). This implies that any stationary measure µ is absolutely continuous with respect to Lebesgue measure with a smooth density. By the Doob-Khasminskii theorem, this together with topological irreducibility implies the uniqueness of stationary measures.

Projective Hypoellipticity
It is well-known that many dynamical properties of the general SDE (2.2) are encoded in the process z_t := (x_t, v_t), where v_t := D_xΦ^t_ω v / |D_xΦ^t_ω v|. The process (z_t) takes values in the unit tangent bundle SM, defined by the fibers S_x M = S^{n−1}(T_x M), and is called the projective process (as one can just as well consider the process on the projective bundle PM). One can show that z_t solves the lifted version of (2.2) on SM. Here, for a smooth vector field X on M, one defines the "lifted" vector field X̃ on SM, whose horizontal and vertical components are determined via the orthogonal splitting of T_{(x,v)}SM induced by the Levi-Civita connection ∇ on M and the associated Sasaki metric on SM. The "vertical" component V_{∇X} will be referred to as the projective vector field and is defined explicitly by V_{∇X}(v) := ∇X(x)v − ⟨∇X(x)v, v⟩v, where ∇X(x) denotes the total covariant derivative of X, viewed as a linear endomorphism on T_x M. That there should be a connection between the hypoellipticity of the projective process and the Lyapunov exponents is well-documented (see e.g. [7, 18, 47]). Indeed, the sufficient condition proved in [10] for (1.4) in systems of the form (1.1) is the requirement of uniform hypoellipticity of the (z_t) process, i.e. projective hypoellipticity, which we explain next.
Here we recall necessary and sufficient conditions on a collection of vector fields F ⊆ X(M) so that their lifts F̃ = {X̃ : X ∈ F} ⊆ X(SM) satisfy the Hörmander condition on SM. Since the vector fields F may not be volume preserving, it is convenient to define for each X ∈ X(M) and x ∈ M the traceless linear operator on T_x M given by M_X(x) := ∇X(x) − (tr ∇X(x)/n) Id, which we view as an element of the Lie algebra sl(T_x M) of linear endomorphisms A with tr(A) = 0 and Lie bracket given by the commutator [A, B] = AB − BA. Since the projective vector field V_{∇X}(v) includes a projection orthogonal to v, we always have V_{∇X} = V_{M_X}. For each x ∈ M, an important role will be played by a certain Lie sub-algebra m_x(F) of sl(T_x M). Note that m_x(F) is independent of any choice of coordinates (and is in fact independent of the choice of metric). One can further check that m_x(F) is indeed a Lie sub-algebra of sl(T_x M).
The spanning properties of the lifted vector fields F̃ on SM can be related to properties of the Lie algebra m_x(F). An important role is played by the non-trivial fact that the lifting map X → X̃ satisfies the identity $[\widetilde X, \widetilde Y] = \widetilde{[X, Y]}$ and is therefore a Lie algebra isomorphism onto the set of lifts F̃. The associated implications for projective hypoellipticity of the lifts are conveniently recorded in the following proposition from [10].
Proposition 2.6 (Proposition 2.7 in [10]). Let F ⊆ X(M) be a collection of smooth vector fields on M. Their lifts F̃ ⊆ X(SM) satisfy the Hörmander condition on SM if and only if F satisfies the Hörmander condition on M and, for each x ∈ M, m_x(F) acts transitively on S_x M, in the sense that for each (x, v) ∈ SM one has span{Av − ⟨Av, v⟩v : A ∈ m_x(F)} = T_v S_x M. In particular, F̃ satisfies Hörmander's condition on SM if for each x ∈ M, Lie_x(F) = T_x M and m_x(F) = sl(T_x M).
Remark 2.7. In general one should expect that m_x(F) = sl(T_x M) holds "generically". Indeed, it is well-known in the control theory literature (see [14]) that there is an open and dense set of pairs of matrices in sl_n(R) which generate sl_n(R) as a Lie algebra.
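This genericity is easy to probe numerically (our illustration, not an argument from the paper): the explicit nilpotent pair e, f below generates sl_2(R) (dimension 3), and a randomly drawn traceless pair almost surely generates all of sl_3(R) (dimension n² − 1 = 8) under iterated commutators.

```python
import numpy as np

def lie_closure_dim(gens, tol=1e-8):
    """Dimension of the matrix Lie algebra generated by `gens` under [A,B] = AB - BA."""
    n = gens[0].shape[0]
    basis = []  # kept linearly independent, stored as flattened vectors

    def try_add(m):
        if np.linalg.matrix_rank(np.array(basis + [m.ravel()]), tol=tol) > len(basis):
            basis.append(m.ravel())
            return True
        return False

    for g in gens:
        try_add(g)
    changed = True
    while changed:   # bracket current basis elements until the span stabilizes
        changed = False
        for a in list(basis):
            for b in list(basis):
                A, B = a.reshape(n, n), b.reshape(n, n)
                if try_add(A @ B - B @ A):
                    changed = True
    return len(basis)

# Explicit pair: [e, f] = diag(1, -1), so {e, f} generates sl_2 (dimension 3).
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
dim_sl2 = lie_closure_dim([e, f])

# Random traceless pair in sl_3: generically generates the full algebra.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3, 3))
A -= np.trace(A) / 3 * np.eye(3)
B -= np.trace(B) / 3 * np.eye(3)
dim_sl3 = lie_closure_dim([A, B])
```

Of course, a floating-point rank computation like this is only a sanity check; the point of Remark 1.10 above is that exhaustive direct checks of this kind do not scale, which is why the paper works with polynomial ideals instead.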

Chaos and Fisher Information
In this section we briefly recall some of the main results of [10] for the readers' convenience. For this we have to define the sum Lyapunov exponent, which describes the asymptotic exponential rate of volume compression/expansion:
$$ \lambda_\Sigma := \lim_{t \to \infty} \frac{1}{t} \log \left|\det D_x \Phi^t_\omega\right|. $$
With some additional mild integrability (see [10, 34] for discussions), the Kingman subadditive ergodic theorem [35, 36] implies that a unique stationary measure leads to uniquely defined λ_1, λ_Σ, attained for µ × P-a.e. (x, ω).
For general SDEs of the form (2.2), with Alex Blumenthal we provided the following identity connecting a degenerate Fisher-information-type quantity with the Lyapunov exponents.
Proposition 2.8 (Proposition 3.2, [10]). Assume that the SDE (2.2) defines a global-in-time stochastic flow of C¹ diffeomorphisms and that the associated projective process (z_t) has a unique stationary measure ν which is absolutely continuous with respect to the volume measure dq on SM, with smooth density f satisfying some additional mild decay and integrability estimates (see [10] for details). Then an identity holds expressing the Lyapunov exponents in terms of a Fisher-information-type functional of f, where n is the dimension of M, dq is the Riemannian volume measure on SM, and X*_k denotes the formal adjoint of X_k as a differential operator with respect to L²(dq).

Remark 2.9. A sharper version of the identity holds on the conditional measures with nλ_1 − λ_Σ on the right-hand side, providing a time-infinitesimal analogue of the relative entropy inequalities studied in e.g. [7, 24, 38]; see [10] for details.
In [10] we proved the following crucial uniform Hörmander-type lower bound on the Fisher information, connecting the W^{s,1} regularity of f to the Fisher information and therefore to the Lyapunov exponents.
Theorem 2.10 (Theorem 4.2, [10]). Consider the SDE (2.2) for vector fields {X_0^ε, ..., X_r^ε} parameterized by ε ∈ (0, 1]. Suppose that {X̃_0^ε, X̃_1^ε, ..., X̃_r^ε} satisfies the uniform Hörmander condition on SM, and suppose that for all ε ∈ (0, 1] there exists a unique stationary measure ν_ε with smooth density f_ε for the associated projective process (z_t). Then ∃s ∈ (0, 1) such that for every open geodesic ball U ⊂ SM, ∃C_U > 0 such that ∀ε ∈ (0, 1] there holds a lower bound on the Fisher information in terms of ‖χ_U f_ε‖_{W^{s,1}}, where χ_U is a smooth cutoff function equal to 1 inside U and vanishing outside a slightly larger ball U′. Note that both s and C_U are independent of ε.
The above two results give a clear path towards estimating Lyapunov exponents from below if lower bounds on the regularity of f can be obtained.

Application to the Galerkin-Navier-Stokes equations
In the context of the SDE in the specific class (1.1), one can prove the following using standard methods. As discussed in Section 1.1, when the Galerkin-Navier-Stokes equations are written in Fourier variables and the phase space is interpreted through the real and imaginary parts as R^d, the following theorem applies.
Theorem 2.11 (See [10] or [11]). Let X_0(x) = B(x, x) − εAx and consider the class of SDEs (1.1). These SDEs each generate families of global-in-time, smooth stochastic diffeomorphisms Φ^t_ω, and if {X_0, X_1, ..., X_r} satisfies the parabolic Hörmander condition, then for all ε > 0 there exists a unique stationary measure µ_ε with a smooth density ρ_ε which satisfies a pointwise Gaussian upper bound. Moreover, there exist a top Lyapunov exponent λ_1(ε) ∈ R and a sum Lyapunov exponent λ_Σ(ε) such that the defining limits hold µ_ε × P almost-surely.

Remark 2.12. In fact, if X_0, {X_1, ..., X_r} satisfies the uniform parabolic Hörmander condition, then one can prove the pointwise Gaussian upper bound on ρ_ε uniformly in ε, as well as a uniform-in-ε strictly positive lower bound on all compact sets [11].
The main theorem of this paper is a description of the Lie algebra m_w for the 2d Galerkin-Navier-Stokes equations (1.3).

Theorem 2.13. Let N ≥ 392, let X_0(w) = B(w, w) + ε∆w be the Galerkin-Navier-Stokes vector field over M = C^{Z²_{+,N}}, and let X be the corresponding collection of constant forcing vector fields. Then the Lie algebra m_w is all of sl(T_w M), and in particular, from Proposition 2.6, X_0, X satisfies the uniform-in-ε parabolic Hörmander condition on SM.
The majority of the paper is spent proving Theorem 2.13; see Section 5 for a summary of how the pieces fit together in the proof. Next, we briefly summarize why Theorem 2.13 implies Theorem 1.1, using the results of [10].
The following lemma is a consequence of Hörmander's theorem, the Doob-Khasminskii theorem, and geometric control theory; see [10] for details.

Lemma 2.14 (Theorem B.1, [10]). Let X_0(x) = B(x, x) − εAx and consider the class of SDEs (1.1) (with the corresponding conditions assumed on B). Suppose that the lifts X̃_0, X̃ satisfy the uniform parabolic Hörmander condition on SM. Then, ∀ε ∈ (0, ∞), there exists a unique stationary measure ν_ε for the associated projective process with a smooth, strictly positive density f_ε with respect to Lebesgue measure such that f_ε log f_ε ∈ L¹, and ∃C, γ > 0 such that ∀ε ∈ (0, 1], ∫_{SM} f_ε e^{γ|x|²} dq < C; moreover, ∀N > 0 the corresponding moment bound holds ∀ε ∈ (0, 1] (not uniformly in N or ε).

In view of the above, Proposition 2.8 applies to (1.1) (assuming projective hypoellipticity), and Theorem 2.10 then implies there exists an s ∈ (0, 1) such that for every bounded open set U ⊆ SM we obtain a bound on ‖χ_U f_ε‖_{W^{s,1}} in terms of λ_1/ε. Therefore, if λ_1/ε were to remain bounded, one can show that {f_ε}_{ε>0} is precompact in L^p for all sufficiently small p ≥ 1, and so there is a strongly convergent subsequence f_{ε_n} → f ∈ L¹ which is an absolutely continuous stationary density for the ε = 0 limiting deterministic projective process [Proposition 6.1, [10]].
Projective hypoellipticity played the crucial role in reducing the estimate on the Lyapunov exponent to one of regularity of f_ε, and the estimate (1.4) to whether or not there can exist an invariant measure with an L¹ density for the deterministic ε = 0 projective process. That no such invariant density can exist for any model of the form (1.1) was proved in [Proposition 6.2, [10]], and therefore λ_1/ε → ∞. The major additional ingredient used in this step is that the ε = 0 Jacobian D_xΦ^t grows unboundedly as t → ∞ for a.e. initial condition x, necessitating concentration in any invariant measure (see [10] for details). This is deduced using the special structure of the nonlinearity, and for other models it may not be straightforward to verify.
To summarize: to prove Theorem 1.1, it suffices only to verify that {X̃_0, X̃_1, ..., X̃_r} satisfies the uniform-in-ε parabolic Hörmander condition, i.e. Theorem 2.13. The remainder of the paper is dedicated to proving this result, which, as detailed in the next section, is a purely algebraic question.
Remark 2.15. For the specific case of Navier-Stokes with additive stochastic forcing, the Fisher information takes a particularly simple form.

Projective hypoellipticity on complex geometries

Real vs complex spanning
Treating the phase space C^{Z²_{+,N}} as a real manifold using the real and imaginary parts can be awkward and lead to very cumbersome calculations. Due to the convenience of the Fourier description when dealing with Galerkin truncations of PDEs, it makes more sense to find a natural, complex way to view phase space. Specifically, if we have a complex phase space C^n, we should treat it as a complex manifold and complexify the tangent space. First, we review some of the basic concepts from complex geometry for the readers' convenience (see e.g. [Chapter 1, [30]]) and explain how the ideas apply to hypoellipticity of stochastic PDEs, providing a cleaner proof of the spanning condition for Galerkin-Navier-Stokes obtained in [19]. Finally, we explain how the ideas extend to the question of projective hypoellipticity, and we formulate the sufficient condition which occupies the rest of the paper.

Definition 3.1. Given a real vector space V, define its complexification by V ⊗ C := {u + iv : u, v ∈ V}.

We begin by noting the following simple but crucial equivalence between complex and real spanning of a collection of vectors in a real vector space.

Lemma 3.2. Let V be a real vector space and let V ⊗ C be its complexification. For a given collection of vectors {v_k} ⊂ V, we have span_R{v_k} = V if and only if span_C{v_k} = V ⊗ C, (3.1) where span_C denotes the span of a collection of vectors using complex coefficients.
Proof. Real spanning of V implies complexified spanning, since one can span the real and imaginary parts separately. For the converse, suppose that (3.1) holds. This means that for any v ∈ V there exist complex coefficients {c_k} such that v = Σ_k c_k v_k; taking real parts gives v = Σ_k Re(c_k) v_k, so v ∈ span_R{v_k}.

Hörmander's condition on C n
Now we turn to the space C^n, where complexification of the tangent space is natural and most useful. Let X be a smooth vector field over C^n, where C^n is viewed as a real manifold with real tangent space T C^n spanned by the coordinate vectors ∂_{a_k}, ∂_{b_k}, corresponding to the real and imaginary parts respectively. Clearly, T C^n is isomorphic as a vector space to C^n, and therefore we may view each X as a mapping C^n → C^n. In what follows we will complexify the tangent space T C^n ⊗ C and define the complex basis vectors ∂_{z_k} := ½(∂_{a_k} − i∂_{b_k}) and ∂_{z̄_k} := ½(∂_{a_k} + i∂_{b_k}). This naturally induces the splitting T C^n ⊗ C = T^{1,0}C^n ⊕ T^{0,1}C^n, known as the holomorphic and anti-holomorphic bundles respectively (see [30]). In this new basis, (3.2) can be rewritten in terms of the ∂_{z_k} and ∂_{z̄_k} components of X. Recall that the Lie bracket [·, ·] is coordinate independent and does not depend on the choice of basis, and so neither does Lie(F) for a collection F ⊆ X(C^n). Given a collection F ⊂ X(C^n), let Lie(F)_C be the complexification of Lie(F) (obtained by replacing span with span_C in the definition (2.1)). We now have the following simple corollary of Lemma 3.2 regarding spanning for Lie_x(F) (Lemma 3.3).

Remark 3.4. The same proof also applies to any subalgebra of Lie(F), for instance the Lie algebra ideal Lie(X_0; F) with respect to a distinguished drift vector field X_0.
Remark 3.5. Lemma 3.3 means, from a practical perspective, that in order to check Hörmander's condition for a collection of vector fields on C^n, it is sufficient to take complex linear combinations and attempt to isolate ∂_{z_k} and ∂_{z̄_k} separately in order to span both T^{1,0} C^n and T^{0,1} C^n.

Application: hypoellipticity for the stochastic Navier-Stokes equations
In this section we show how the complexification procedure above allows us to give a cleaner proof of Hörmander's condition for the Navier-Stokes equations with additive stochastic forcing in 2d, first identified in [19] and expanded upon in [28]. Recall from Section 1.1 that we can formulate the 2d stochastic Galerkin-Navier-Stokes equations as an SDE on C^{Z^2_{+,N}}, with the sum over all j, k ∈ Z^2_{0,N} such that j + k = ℓ. Due to the reality constraint w_{−ℓ} = w̄_ℓ we find it convenient to index the basis vectors on the full lattice Z^2_0 via Z^2_0 = Z^2_+ ∪ Z^2_−, where Z^2_− = −Z^2_+. Combining this with the reality constraint B_{−ℓ}(w, w) = B̄_ℓ(w, w), we can write X_0 in a more succinct notation involving a sum over the full lattice. We note that for any w ∈ C^{Z^2_{0,N}} satisfying w_{−ℓ} = w̄_ℓ, the ∂_{w_ℓ} as defined above behave as Wirtinger derivatives, satisfying ∂_{w_ℓ} w_i = δ_{ℓ i} for each ℓ, i ∈ Z^2_{0,N}. From this, we can easily obtain simple expressions for the brackets. Our goal is to prove the following:

Proposition 3.6. Suppose K ⊂ Z^2_{0,N} satisfies Assumption 1. Then Lie(X_0; X)_C contains the constant vector fields {∂_{w_k} : k ∈ Z^2_{0,N}} and moreover, it follows from Lemma 3.3 that X_0, X satisfy the uniform parabolic Hörmander condition on C^{Z^2_{+,N}} viewed as a real manifold.
Proof. Since, for a given ℓ ∈ Z^2_{+,N}, ∂_{w_ℓ} and ∂_{w_{−ℓ}} = ∂̄_{w_ℓ} are complex linear combinations of ∂_{a_ℓ} and ∂_{b_ℓ}, it suffices to take brackets with respect to ∂_{w_ℓ} for all ℓ; as this is independent of the base point, it is clear that spanning will imply uniform spanning.
It becomes clear that we need the following iteration: starting from K, each step adjoins the frequencies reachable by the brackets above. By Assumption 1, this iteration continues until it generates all of Z^2_{0,N}, which implies that {∂_{w_k} : k ∈ Z^2_{0,N}} ⊂ Lie(X_0; X)_C.
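The generation iteration can be sketched numerically. The sketch below is hypothetical and uses the classical nondegeneracy rule (the bracket of modes j and k contributes the mode j + k when j and k are non-parallel and |j| ≠ |k|, as in [19]) in place of the precise coefficient conditions of the text:

```python
import itertools

def generates(K, N):
    """Iterate sums of already-generated frequencies until saturation.

    Toy rule (an assumption standing in for the bracket computation in the
    text): modes j and k generate j + k when j and k are not parallel and
    have different Euclidean norms.
    """
    lattice = {(a, b) for a in range(-N, N + 1)
               for b in range(-N, N + 1)} - {(0, 0)}
    gen = set(K) | {(-a, -b) for a, b in K}   # reality constraint: include -K
    while True:
        new = set()
        for j, k in itertools.product(gen, gen):
            s = (j[0] + k[0], j[1] + k[1])
            not_parallel = j[0] * k[1] - j[1] * k[0] != 0
            diff_norm = j[0]**2 + j[1]**2 != k[0]**2 + k[1]**2
            if s in lattice and s not in gen and not_parallel and diff_norm:
                new.add(s)
        if not new:
            return gen == lattice
        gen |= new

# The classical generating set {(1,0), (1,1)} reaches the whole truncated
# lattice, while a single frequency (all brackets parallel) does not.
assert generates({(1, 0), (1, 1)}, 3)
assert not generates({(1, 0)}, 3)
```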

Projective spanning on C^n
When the manifold is C^n we will also find it useful to complexify the tangent space to show projective hypoellipticity. Let V be a real vector space and recall that for a given vector space W (real or complex) the space sl(W) is the Lie algebra of linear endomorphisms H of W with tr H = 0 (note this is independent of basis) and Lie bracket given by the commutator [G, H] = GH − HG. Note that any endomorphism H of V can be trivially extended to an endomorphism of the complexification V ⊗ C via H(v_1 + i v_2) = H v_1 + i H v_2; moreover, any G ∈ sl(V ⊗ C) can be written as G = G_1 + i G_2, where G_1, G_2 ∈ sl(V), so that we have sl(V ⊗ C) = sl(V) ⊗ C, i.e. sl(V) is a real form of sl(V ⊗ C). We denote by Lie(H) the Lie algebra of endomorphisms generated by a collection H ⊆ sl(V). Likewise, define Lie(H)_C as above with span_C and H extended to sl(V ⊗ C). The next result follows easily from Lemma 3.2 and the bilinearity of X, Y → ad(X)Y.
Proposition 3.7. Let V be a real vector space and H ⊆ sl(V). Then Lie(H) = sl(V) if and only if Lie(H)_C = sl(V ⊗ C).

In light of the linearity of the mapping X → M_X(z), we have the following property of the Lie algebra of endomorphisms induced by Lie(F)_C for a collection F ⊂ X(C^n).
In particular, the lifts F̃ satisfy Hörmander's condition on SC^n if m_z(F)_C = sl(T_z C^n) ⊗ C and F satisfies Hörmander's condition on C^n.

Remark 3.9. Corollary 3.8 is useful in the sense that it allows one to work directly with m_x(X_0; F)_C and therefore consider matrices ∇X(x) in ∂_{z_k}, ∂_{z̄_k} coordinates, which often take a much simpler form than their counterparts in ∂_{a_k}, ∂_{b_k} coordinates.

A sufficient condition for projective hypoellipticity for Navier-Stokes
In this section, we consider a sufficient condition for projective hypoellipticity for the Navier-Stokes equations in terms of a real matrix Lie algebra obtained by working in complex coordinates. These matrices take on a particularly simple form that makes the problem much more tractable, which is crucial for the arguments that follow.
Following the set-up of Section 3.3, we define the Navier-Stokes vector field on the complexified tangent space, where we recall that w_{−ℓ} = w̄_ℓ and that we have defined B_ℓ(w, w) for ℓ ∈ Z^2_0. As in the set-up of Proposition 3.6, we assume that we have vector fields {∂_{w_k}}_{k ∈ K ∪ −K}, where K ⊂ Z^2_{0,N} generates Z^2_{0,N} in the sense of Assumption 1. By Proposition 3.6, Lie(X_0; {∂_{w_k}}_{k∈K})_C contains the constant vector fields {∂_{w_k}}_{k ∈ Z^2_{0,N}}, and therefore any vector field X ∈ Lie(X_0; {∂_{w_k}}_{k∈K})_C can always be shifted by a constant vector field X̃ = X − X(z), so that X̃(z) = 0 and ∇X̃ = ∇X. Additionally, by Corollary 3.8, and the fact that B(w, w) is bilinear and ∆w is linear, this implies that for each k ∈ Z^2_{0,N} the endomorphism H_k belongs to m_w(X_0; {∂_{w_k}}_{k∈K})_C. Moreover, due to the bilinear nature of B(w, w), each H_k is constant and independent of w. The following lemma gives an explicit matrix representation of H_k in ∂_{w_k} coordinates as a |Z^2_{0,N}| = (2N + 1)^2 − 1 dimensional square matrix indexed over Z^2_{0,N}. This simple form comes from the convenient form of the nonlinearity in complex variables (see Section 1.1).

Lemma 3.10. For each k ∈ Z^2_{0,N} we have an explicit formula for H_k in ∂_w coordinates, given by (3.3).

Note that in {∂_{w_k}} coordinates the matrices H_k are real, and therefore we only need them to generate an appropriate Lie algebra of real matrices in order for them to span the complexified space by Corollary 3.8.
Below we record a sufficient condition for projective spanning in the Galerkin-Navier-Stokes system in terms of the Lie algebra generated by the H k matrices.
where we use the notation sl_{Z^2_{0,N}}(R) to denote the Lie algebra of real, trace-free matrices indexed by Z^2_{0,N}.
Proof. By the above discussion regarding Proposition 3.6, we have that {H_k}, viewed as linear endomorphisms, belong to m_w(X_0; {∂_{w_k}}_{k∈K})_C. If the corresponding real matrix Lie algebra Lie({H_k}) equals sl_{Z^2_{0,N}}(R), then by Proposition 3.7 the complexified algebra of endomorphisms satisfies Lie({H_k}) ⊗ C = sl(T_w C^{Z^2_{+,N}}) ⊗ C, and the conclusion follows from Corollary 3.8.

A distinctness condition in the diagonal algebra
In order to show that Lie({H_k}) = sl_{Z^2_{0,N}}(R) for Proposition 3.11, a special role will be played by a certain diagonal subalgebra h. Genericity properties of elements of this algebra, specifically related to the distinctness of certain differences of diagonal elements, will play a crucial role in our ability to isolate elementary matrices, which is particularly challenging for the Navier-Stokes equations due to the non-local frequency interactions made explicit by the banded structure of the H_k matrices.

An illustrative example
Taking a page from the classical root space decomposition of semi-simple Lie algebras, we will make use of a strategy that utilizes the fact that elementary matrices are eigenvectors of the adjoint action of a diagonal matrix. To fix ideas, we first consider an idealized situation. Let D be any diagonal matrix in sl_n(R) with diagonal entries D_{ii} denoted by D_i. It is well known and easily verifiable that for any elementary matrix E_{i,j} (the matrix with a one in the ith row and jth column and zeros elsewhere) one has

ad(D) E_{i,j} = [D, E_{i,j}] = (D_i − D_j) E_{i,j},

and therefore E_{i,j} is an eigenvector of the operator ad(D) with eigenvalue D_i − D_j. This means that if D is suitably generic, in the sense that its diagonal entries have distinct differences, then the operator ad(D) has simple eigenvalues. Such a distinctness property and the associated simplicity of the spectrum give a clear strategy for spanning sets of elementary matrices that generate sl_n(R), using an approach similar to Krylov subspace methods for generating sets of linearly independent eigenvectors [5,51]. Specifically we have the following.
Proposition 4.1. Let H be a matrix in sl_n(R) whose diagonal entries are zero, H_{ii} = 0, and with at least one non-zero element away from the diagonal. Suppose, in addition, that there is a diagonal matrix D whose diagonal entries D_i = D_{ii} satisfy D_i − D_j ≠ D_{i'} − D_{j'} for each (i, j), (i', j') ∈ supp(H) with (i, j) ≠ (i', j').
Then, for N = |supp(H)|, span{H, ad(D)H, ad(D)^2 H, . . ., ad(D)^{N−1} H} contains the elementary matrices E_{i,j} for each (i, j) ∈ supp(H).

Proof. Write H = Σ_{(i,j)} H_{ij} E_{ij} and let λ_{ij} = D_i − D_j be the corresponding eigenvalue of ad(D). The Krylov subspace in question becomes span{Σ_{(i,j)} H_{ij} λ_{ij}^m E_{ij} : 0 ≤ m ≤ N − 1}. The linear independence of these vectors reduces to the invertibility of the Vandermonde matrix (λ_{ij}^m), which follows from the assumption that λ_{ij} ≠ λ_{i'j'} for (i, j) ≠ (i', j'). Hence, due to the full rank, the Krylov subspace coincides with span{E_{ij} : H_{ij} ≠ 0}.
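Proposition 4.1 is easy to test numerically. A minimal sketch (with an ad-hoc choice of D and H so that the relevant differences D_i − D_j are distinct):

```python
import numpy as np

# Toy check of Proposition 4.1: if the differences D_i - D_j are distinct
# over supp(H), the Krylov space {H, ad(D)H, ad(D)^2 H, ...} spans the
# elementary matrices E_ij supported on H.
D = np.diag([0.0, 1.0, 3.0, 7.0])               # differences all distinct
H = np.zeros((4, 4))
H[0, 1] = 2.0; H[1, 2] = -1.0; H[3, 0] = 5.0    # supp(H), zero diagonal

ad = lambda A, B: A @ B - B @ A
krylov, M = [], H.copy()
for _ in range(3):                               # N = |supp(H)| = 3
    krylov.append(M.flatten())
    M = ad(D, M)

# Full rank of the Krylov vectors <=> each E_ij for (i,j) in supp(H)
# lies in their span (Vandermonde invertibility).
assert np.linalg.matrix_rank(np.array(krylov)) == 3
```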
Remark 4.2. In the context of Proposition 4.1, it is important to note that one does not necessarily need H to have all its off-diagonal entries non-zero in order to show that Lie(D, H) = sl_n(R). Indeed, a relatively small number of elementary matrices can easily generate sl_n(R). For instance, it is readily seen that the elementary matrices {E_{i,i+1}, E_{i+1,i} : 1 ≤ i ≤ n − 1} are sufficient to generate sl_n(R).
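One can verify this generation claim by brute force, repeatedly adjoining commutators until the span stabilizes; the following sketch does so for n = 4 (dimension n^2 − 1 = 15):

```python
import numpy as np
from itertools import combinations

# Toy check of Remark 4.2: the 2(n-1) elementary matrices E_{i,i+1} and
# E_{i+1,i} generate all of sl_n(R) under repeated commutators.
n = 4
def E(i, j):
    M = np.zeros((n, n)); M[i, j] = 1.0; return M

basis = ([E(i, i + 1).flatten() for i in range(n - 1)]
         + [E(i + 1, i).flatten() for i in range(n - 1)])
changed = True
while changed:
    changed = False
    span = np.array(basis)
    r = np.linalg.matrix_rank(span)
    mats = [v.reshape(n, n) for v in basis]
    for A, B in combinations(mats, 2):
        C = (A @ B - B @ A).flatten()          # adjoin independent brackets
        if np.linalg.matrix_rank(np.vstack([span, C])) > r:
            basis.append(C); changed = True; break

# The bracket-closed span has dimension n^2 - 1, i.e. all of sl_n(R).
assert np.linalg.matrix_rank(np.array(basis)) == n * n - 1
```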

The diagonal subalgebra h
The set of matrices {H_k} defined in (3.3) does not contain any diagonal matrices; however, by commuting H_k and H_{−k}, we obtain a diagonal algebra, which we denote h.

Lemma 4.3. For each k ∈ Z^2_{0,N}, the commutator D_k := [H_k, H_{−k}] is a diagonal matrix with diagonal entries D^k_i, for each i ∈ Z^2_{0,N}, given by an explicit formula, and therefore h = span{D_k} is a commutative Lie sub-algebra of sl_{Z^2_{0,N}}(R).
Remark 4.4. It is important to note that the truncated lattice Z^2_{0,N} actually makes the form of D^k_i more complicated. Depending on the choice of k, the indicator functions 1_{Z^2_{0,N}}(i + k) and 1_{Z^2_{0,N}}(i − k) have non-trivial regions where they do and do not overlap, leading to significant complications in proofs that utilize computational algebra. A remarkable fact is that the "infinite dimensional" case, obtained by replacing Z^2_{0,N} with the full lattice Z^2_0, actually gives a much cleaner form, making it much more amenable to algebraic methods.
We would like to use the diagonal matrices in h to proceed as in Section 4.1; however, the situation here is far more delicate than that presented in Proposition 4.1, due to the fact that D^k_i has an inversion symmetry D^k_{−i} = −D^k_i, which fundamentally restricts the possibility of having distinct differences. In particular, for any given diagonal matrix D satisfying D_{−i} = −D_i, the adjoint operator ad(D) is incapable of having simple spectrum, since the odd symmetry of D_i implies that there are always two-dimensional invariant spaces associated to the adjoint operator. Specifically, we see that for each i, j ∈ Z^2_{0,N} with i ≠ j,

ad(D) E_{i,j} = (D_i − D_j) E_{i,j}  and  ad(D) E_{−j,−i} = (D_i − D_j) E_{−j,−i},

and therefore the eigenvalue D_i − D_j of ad(D) always has multiplicity at least 2, with invariant space span{E_{i,j}, E_{−j,−i}}.
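This multiplicity-two obstruction is easy to see in a toy example with an odd diagonal matrix on a symmetric index set (standing in for Z^2_0):

```python
import numpy as np

# Toy illustration: for a diagonal D with the odd symmetry D_{-i} = -D_i,
# the eigenvalue D_i - D_j of ad(D) occurs on both E_{i,j} and E_{-j,-i}.
idx = [-2, -1, 1, 2]                       # symmetric index set
pos = {i: p for p, i in enumerate(idx)}
D = np.diag([-5.0, -1.0, 1.0, 5.0])        # D_{-i} = -D_i

def E(i, j):
    M = np.zeros((4, 4)); M[pos[i], pos[j]] = 1.0; return M

ad = lambda A, B: A @ B - B @ A
i, j = 1, -2
lam = D[pos[i], pos[i]] - D[pos[j], pos[j]]              # D_i - D_j = 6
assert np.allclose(ad(D, E(i, j)), lam * E(i, j))
assert np.allclose(ad(D, E(-j, -i)), lam * E(-j, -i))    # same eigenvalue twice
```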
With this in mind, it is convenient to write H_k as a linear combination of such matrices; in particular, for each k ∈ Z^2_{0,N}, H_k can be written as a combination of the matrices M_{i,j} introduced below. Taking into account the sparsity of H_k, and assuming that the diagonal matrix D satisfies the distinctness condition (4.1) for each k, i, i', i − k, i' − k ∈ Z^2_{0,N} with i ≠ i' and i' ≠ k − i, a procedure similar to the one carried out in Proposition 4.1 implies that if D belongs to Lie({H_k}), then Lie({H_k}) also contains the corresponding sets of matrices for each k ∈ Z^2_{0,N} and i, i'. By relabeling indices and eliminating k, this means that we can obtain matrices of the form M_{i,j}. Similarly, in the distinctness condition (4.1) we can eliminate k and reduce it to a more symmetric constraint on quadruples, with constraint set C_N. In general we have the following convenient reformulation of the distinctness condition (4.1).
Definition 4.5 (Distinct). We say a diagonal matrix D ∈ h is distinct if for every (i, j, ℓ, m) ∈ C_N with i + j + ℓ + m = 0 we have D_i + D_j + D_ℓ + D_m ≠ 0.

Under this new definition, we can summarize the above discussion as follows.
Lemma 4.7. Suppose that h contains a distinct diagonal matrix in the sense of Definition 4.5. Then Lie({H_k}) contains the matrices M_{i,j}.
It turns out that this set of matrices M_{i,j}, each one comprising a linear combination of a pair of elementary matrices, is sufficient to generate all of sl_{Z^2_{0,N}}(R). The proof of this fact is the content of the following subsection.
As a simple corollary of this and Lemma 4.7, Proposition 3.11 reduces to a condition on the existence of a distinct matrix inside h.

Proof of Proposition 4.8
To show Proposition 4.8 we first assume an algebraic property of the coefficients c_{j,k}, which we will prove in Proposition 4.11 using techniques from computational algebraic geometry. To simplify notation in what follows, denote by S_i the set of frequencies k that interact with i, and by d^{k,k'}_{i,j} the determinant of the 2 × 2 system (4.4) below.

Lemma 4.10. Suppose that for each i, j ∈ Z^2_{0,N} with i − j ∈ Z^2_{0,1}, there exist k, k' ∈ S_i ∩ S_j such that d^{k,k'}_{i,j} ≠ 0. Then the matrices {M_{i,j}} generate sl_{Z^2_{0,N}}(R).

Proof. If we take commutators of matrices of the form [M_{i,k}, M_{k,j}], where i − j ∈ Z^2_{0,1} and k ∈ S_i ∩ S_j, we obtain linear combinations of E_{i,j} and E_{−j,−i}. Therefore, if we pick any two k, k' ∈ S_i ∩ S_j, we obtain a 2 × 2 linear system (4.4) for E_{i,j} and E_{−j,−i}. We can write E_{i,j} and E_{−j,−i} as linear combinations of [M_{i,k}, M_{k,j}] and [M_{i,k'}, M_{k',j}] provided that for each i, j ∈ Z^2_0 with i − j ∈ Z^2_{0,1} we can find k, k' ∈ S_i ∩ S_j such that the determinant d^{k,k'}_{i,j} is non-zero. This is true by assumption, and therefore we can solve the linear system (4.4) and obtain all elementary matrices E_{i,j} for i, j ∈ Z^2_{0,N} with i − j ∈ Z^2_{0,1}. One can easily check that all such elementary matrices generate sl_{Z^2_{0,N}}(R) (see Remark 4.2).
Note that the property d^{k,k'}_{i,j} ≠ 0 is a purely algebraic one. In particular, suppose by contradiction that there exist i, j ∈ Z^2_{0,N} with i − j ∈ Z^2_{0,1} such that d^{k,k'}_{i,j} = 0 for all k, k' ∈ S_i ∩ S_j. Then i, j must solve a set of rational equations (with integer coefficients), one for each pair (k, k') ∈ (S_i ∩ S_j)^2. Next, we show that this system of rational equations is algebraically inconsistent. We will prove the following proposition using machinery from computational algebraic geometry, which we review in Appendix A. The computations are done using Maple; see Appendix C.1 for the computer code.
Proposition 4.11. For each i, j ∈ Z^2_{0,N} with i − j ∈ Z^2_{0,1}, there exist k, k' ∈ S_i ∩ S_j such that d^{k,k'}_{i,j} ≠ 0.
Proof. To prove this, we first note that d(i, j, k, k', r) = d^{k,k'}_{i,j} is a purely rational algebraic function of the variables i = (i_1, i_2), j = (j_1, j_2), k = (k_1, k_2), k' = (k'_1, k'_2) and r. We denote the numerator by P(i, j, k, k', r) = numer(d(i, j, k, k', r)).
Suppose by contradiction that there exist i, j ∈ Z^2_{0,N} with i − j ∈ Z^2_{0,1} such that d(i, j, k, k', r) = 0 for all k, k' ∈ S_i ∩ S_j; then P(i, j, k, k', r) = 0 for all such k, k' as well. Note that this polynomial is degree 10 in k, k'. If N ≥ 8, then since i − j ∈ Z^2_{0,1}, S_i ∩ S_j always contains Z^2_{0,6}, and Lemma B.1 implies that the collection of polynomials in i, j, r defined by the coefficients of the polynomial in k, k' must also vanish (note that the coefficients f_1, ..., f_s are polynomials in (i_1, i_2, j_1, j_2) and r with integer coefficients). By extending the variables i_1, i_2, j_1, j_2 and r to the algebraically closed field C, we define the polynomial ideal I generated by {f_1, . . ., f_s} (see Appendix A for a review of the relevant algebraic geometry). Next we define the constraint polynomial g(i, j, r) = r^2 |i|_r^2 |j|_r^2 |i − j|_r^2. Note that on C^5, the non-vanishing g ≠ 0 exactly encodes the constraints i ≠ 0, j ≠ 0, i ≠ j and r ≠ 0. In light of this, our goal is to show that the affine varieties induced by I and g are the same, since this implies that the only common zeros of {f_1, . . ., f_s} in C^5 are those with i = 0, j = 0, i = j or r = 0. By the strong Nullstellensatz [Ch 4, Theorem 10 [16]] (see also Theorem A.11 below), this is true if and only if there exists an n ∈ Z_{≥0} such that g^n ∈ I, or equivalently, by Theorem A.12, if the reduced Gröbner basis of the saturation I : g^∞ for any given monomial ordering is {1} (see Appendix A.2 for background on saturation). To compute a saturation with respect to a single polynomial, it suffices to introduce an extra variable z to represent 1/g and consider the augmented ideal Ĩ = ⟨f_1, . . ., f_s, zg − 1⟩. By Theorem A.12, if {1} is the reduced Gröbner basis for Ĩ, then it is also the reduced Gröbner basis for I : g^∞. We use Maple [1] to compute the reduced Gröbner basis G for the ideal Ĩ. This computation is done in graded reverse lexicographic order (or "grevlex") using an implementation of the F4 algorithm [22]; see Appendix C.
The result is G = {1}, thereby concluding the proof.
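The saturation argument in the proof can be illustrated on a toy system. The sketch below uses SymPy's `groebner` in place of Maple's implementation; the system {xy, x^2 + y^2} has its only common zero at the origin, so saturating by g = x collapses the reduced basis to {1}:

```python
from sympy import symbols, groebner

# Minimal sketch of the saturation trick: to show all common zeros of a
# system satisfy g = 0, adjoin z*g - 1 (forcing g != 0) and check that the
# reduced Groebner basis collapses to {1}.
x, y, z = symbols('x y z')
f1, f2 = x*y, x**2 + y**2        # over C the only common zero is x = y = 0
g = x                            # the "constraint" polynomial
G = groebner([f1, f2, z*g - 1], x, y, z, order='grevlex')
assert list(G.exprs) == [1]      # basis {1}: no common zero with g != 0
```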

Verifying distinctness in the diagonal algebra
So far we have shown that if h contains a distinct matrix D in the sense of Definition 4.5, then Lie({H_k}) = sl_{Z^2_{0,N}}(R), which implies projective spanning by Proposition 3.11. The goal of this section is to show that h does contain many distinct matrices; in fact, they are 'generic' in the sense that they form an open and dense set in h. Unfortunately, each individual diagonal matrix D_k = [H_k, H_{−k}] is certainly not distinct, since there are many degeneracies related to each particular k. However, we have the benefit of a large number of such diagonal matrices and can take linear combinations of the D_k to find a distinct matrix. Specifically, taking linear combinations allows one to reduce the distinctness condition (4.3) to a much milder condition on the entire collection {D_k}.
Indeed, the main result of this section, and the main effort of the proof, is to show the following sufficient condition on the collection {D_k}.

Proposition 5.1. For each (i, j, ℓ, m) ∈ C_N (defined in (4.2)) with i + j + ℓ + m = 0, there exists a k ∈ Z^2_{0,N} such that D^k_i + D^k_j + D^k_ℓ + D^k_m ≠ 0.

We now show how Proposition 5.1 implies that "most" elements in the span of {D_k} are in fact distinct.
Lemma 5.2. Assume the result of Proposition 5.1 holds. Then there exists an open and dense set of matrices in h = span{D_k : k ∈ Z^2_{0,N}} that are distinct in the sense of Definition 4.5.
Proof. For each fixed (i, j, ℓ, m) ∈ C_N, denote the vector w_{(i,j,ℓ,m)} = (D^k_i + D^k_j + D^k_ℓ + D^k_m)_{k ∈ Z^2_{0,N}}, and let Γ_{(i,j,ℓ,m)} = {α ∈ R^{Z^2_{0,N}} : α · w_{(i,j,ℓ,m)} ≠ 0}. By Proposition 5.1, w_{(i,j,ℓ,m)} is always a non-zero vector, and hence Γ_{(i,j,ℓ,m)} is an open and dense set (being the complement of a hyperplane). Since C_N is a finite set, the intersection Γ = ∩_{(i,j,ℓ,m) ∈ C_N} Γ_{(i,j,ℓ,m)} is also open and dense in R^{Z^2_{0,N}}. It follows that for any α ∈ Γ, the linear combination Σ_k α_k D_k is distinct.
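The genericity argument is elementary: the bad coefficient vectors α lie on finitely many hyperplanes {α · w = 0}, so almost every α works. A toy sketch with made-up vectors w:

```python
import numpy as np

# Sketch of Lemma 5.2's genericity argument: coefficient vectors alpha that
# fail distinctness lie on finitely many hyperplanes {alpha . w = 0}, so
# almost every alpha gives a distinct linear combination. Any alpha off the
# (finitely many) hyperplanes works; powers of a small parameter suffice
# here for a deterministic choice.
ws = [np.array(v, dtype=float) for v in [(1, -1, 0), (0, 2, 3), (5, 0, -2)]]
alpha = np.array([1.0, 0.1, 0.01])
assert all(abs(alpha @ w) > 0 for w in ws)
```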
We now briefly summarize how Proposition 5.1 completes the proof of the main results of our paper.
Proof of Theorems 2.13 and 1.1. Proposition 5.1 together with Lemma 5.2 imply that h contains a distinct matrix in the sense of Definition 4.5. Then, Corollary 4.9 and Proposition 3.11 imply projective hypoellipticity for the Navier-Stokes equations, i.e. Theorem 2.13, from which Theorem 1.1 follows by the results of [Theorem C; [10]]; see Sections 2.3 and 2.4 for more information.

The simpler "infinite dimensional" case
Before continuing with the proof of Proposition 5.1, which is rather technical in nature due to the presence of the Galerkin truncation, it is very instructive to first see how the proof goes in the "infinite dimensional" case, when the Galerkin truncation is removed and we instead consider the entire lattice Z^2_0. The actual proof is similar in spirit to the one presented below, just repeated 35 times to cover various edge cases. The proof in this section has an accompanying Maple worksheet that performs the algebraic computations and computes the reduced Gröbner bases using exact arithmetic; see Appendix C.2.
The full proof of Proposition 5.1 will make use of the algebraic structure of D(i, k, r) = D^k_i (recall the r dependence is implicit) as a piecewise-defined rational function on (Z^2_{0,N})^2. The overall goal is to show that for each side length r ≠ 0, there do not exist any solutions (i, j, ℓ, m) ∈ C_N with i + j + ℓ + m = 0 to the set of Diophantine equations D(i, k, r) + D(j, k, r) + D(ℓ, k, r) + D(m, k, r) = 0, k ∈ Z^2_{0,N}. As mentioned in Remark 4.4, without the Galerkin cut-off D(i, k, r) takes a much simpler rational algebraic form that is not piecewise defined on the lattice. In light of this, our strategy is to extend the rational function W(i, j, ℓ, m, k, r), in the variables i, j, ℓ, m, k and r, to the algebraically closed field C, and show that the resulting system of algebraic equations is inconsistent. In particular, the numerator polynomial P(i, j, ℓ, m, k, r) = numer W(i, j, ℓ, m, k, r) belongs to C[i, j, ℓ, m, k, r], has integer coefficients, and vanishes whenever W does. In many ways it is this polynomial, and the fact that it has finite degree and integer coefficients, that allows for a computer algebra proof that holds for arbitrary Galerkin truncations.
The goal is to understand the common zeros of the collection of polynomials obtained by evaluating k on various subsets of the lattice. In particular, for a given subset K ⊆ Z^2_0, we consider the ideal of polynomials in the remaining 9 variables (i, j, ℓ, m, r) ∈ C^9 generated by evaluating P at each k ∈ K:

I_K := ⟨P_k : k ∈ K⟩ ⊂ C[i, j, ℓ, m, r],  where P_k(i, j, ℓ, m, r) = P(i, j, ℓ, m, k, r).
We also introduce the two polynomials h_1, h_2 describing the i + j + ℓ + m = 0 constraint, as well as the polynomial g(i, j, ℓ, m, r), whose non-vanishing implies that r ≠ 0, i ≠ 0, j ≠ 0, ℓ ≠ 0, m ≠ 0 and (i + j, ℓ + m) ≠ (0, 0), (i + ℓ, j + m) ≠ (0, 0), (i + m, j + ℓ) ≠ (0, 0); therefore {g ≠ 0} perfectly encodes the constraint set C_N along with the assumption that r ≠ 0. The proof will be complete, then, if we can find a set K ⊆ Z^2_0 so that the affine variety generated by I_K, h_1, h_2 is the same as that generated by g, namely V(I_K, h_1, h_2) = V(g). In order to show this, it will also be useful to freeze i, j, ℓ, m, r and treat P as a polynomial in k = (k_1, k_2), regarding the coefficients as polynomials in C[i, j, ℓ, m, r]. We denote the collection of these polynomials by {f_1, . . ., f_s} = coeffs(P, {k_1, k_2}). Note that the number of polynomials in {f_1, . . ., f_s} only depends on the degree of the polynomial P in k and is independent of any truncation.
By Lemma B.1 and the observation that P is degree 19 in k, if there exists k' ∈ Z^2_0 such that {k ∈ Z^2_0 : |k − k'|_∞ ≤ 10} ⊆ K, then the polynomial ideals are equal: ⟨f_1, . . ., f_s⟩ = ⟨P_k : k ∈ K⟩. Since we can obviously find such a set K, our goal reduces to showing that V(f_1, . . ., f_s, h_1, h_2) = V(g). By Theorem A.11, this is equivalent to showing that the saturated ideal ⟨f_1, . . ., f_s, h_1, h_2⟩ : g^∞ is ⟨1⟩, or, more practically from a computational standpoint, by Theorem A.12, that {1} is the reduced Gröbner basis for the augmented ideal Ĩ = ⟨f_1, . . ., f_s, h_1, h_2, zg − 1⟩, with z added as an extra variable. This can indeed be checked computationally in Maple. Specifically, we compute the reduced Gröbner basis of Ĩ using the graded reverse lexicographic monomial order (or "grevlex") and an implementation of the F4 algorithm [22] (see Appendices A and C); the computation verifies that G = {1}, thereby concluding the proof.

Treating the Galerkin truncation: Proof of Proposition 5.1
As already mentioned, the Galerkin truncation makes D(i, k, r) = D^k_i a piecewise-defined rational function (depending on the choice of k), and so great care must be taken to consider all the possible combinations of algebraic forms on different partitions of the lattice in order to carry out an argument similar to the one given above.
Proof of Proposition 5.1. To begin, it is convenient to write D^k_i in a proper piecewise-defined sense. To do this we define the relevant index sets and the rational functions D_+, D and D_−, so that D(i, k, r) can be written piecewise, taking one of the algebraic forms {D_+, D, D_−, 0} depending on the region in which i lies.
This means that for each fixed i ∈ Z^2_{0,N}, D(i, k, r) is obtained by evaluating exactly one of the four exact rational algebraic functions belonging to {D_+, D, D_−, 0}; moreover, it is not hard to show that for each such i ∈ Z^2_{0,N} there is always a suitably large set K such that D(i, k, r) takes the same algebraic form for all k ∈ K. Based on this idea, it is the following key lemma that allows us to treat the piecewise rational behavior of the sum W(i, j, ℓ, m, k, r) := D(i, k, r) + D(j, k, r) + D(ℓ, k, r) + D(m, k, r) in a purely algebraic fashion, as long as N is large enough.
Before proving Lemma 5.3 (which is done in the following subsection), let us see how to use it to complete the proof of Proposition 5.1. Indeed, with this lemma in hand, the proof now follows along similar lines as that of Section 5.1 above. Fix (i, j, ℓ, m) ∈ C_N and r ≠ 0 with i + j + ℓ + m = 0, and assume by contradiction that W(i, j, ℓ, m, k, r) = 0 for all k ∈ Z^2_{0,N}.
By Lemma 5.3, we may choose a set K so that for all k ∈ K, W(i, j, ℓ, m, k, r) is a given fixed rational function, and therefore we can define, as in Section 5.1, the numerator polynomial P(i, j, ℓ, m, k, r) = numer(W(i, j, ℓ, m, k, r)) and observe that by assumption we have (i, j, ℓ, m, r) ∈ V(I_K, h_1, h_2), where I_K = ⟨P_k : k ∈ K⟩. Therefore the proof is complete if we can show that V(I_K, h_1, h_2) = V(g), where g is defined by (5.1). By Lemma B.1 and Lemma 5.3, using that P is of degree at most 19 in k, we can choose a = 10 so that 2a > 19, and therefore the polynomials {f_1, . . ., f_s} = coeffs(P, {k_1, k_2}) generate the ideal I_K. We can now follow the exact same procedure as in Section 5.1 to show that the reduced Gröbner basis of the extended ideal Ĩ = ⟨f_1, . . ., f_s, h_1, h_2, zg − 1⟩ is {1}. This can be verified computationally for each of the possible algebraic forms of W.

Proof of Lemma 5.3
We now turn to the proof of the key Lemma 5.3.
Proof of Lemma 5.3. Fix an integer a > 0 and for each k' ∈ Z^2_{0,N−a}, let K_{k'} := {k ∈ Z^2_{0,N} : |k − k'|_∞ ≤ a}, and define the sets on which D(i, k, r) takes a specific algebraic form ({D_+, D_−, D, 0} respectively) uniformly for k satisfying |k − k'|_∞ ≤ a. This means we have a "good" set G_{k'}, where for each fixed i ∈ G_{k'}, D(i, k, r) takes a consistent algebraic form for all |k − k'|_∞ ≤ a, and a remaining "bad" set B_{k'} := Z^2_{0,N} \ G_{k'}, where D(i, k, r) does not take a consistent algebraic form for all |k − k'|_∞ ≤ a (see Figure 1 for an illustration of these sets in the lattice). For each i ∈ Z^2_{0,N}, denote by δ_i the delta measure on Z^2_{0,N} concentrated at i, defined for each A ⊆ Z^2_{0,N} by δ_i(A) = 1 if i ∈ A and 0 otherwise. Likewise, for any four lattice points {i, j, ℓ, m} ⊆ Z^2_{0,N}, denote the counting measure γ_{i,j,ℓ,m} := δ_i + δ_j + δ_ℓ + δ_m, which counts how many of the lattice points {i, j, ℓ, m} belong to a given subset of the lattice. Note that for any A ⊆ Z^2_{0,N}, 0 ≤ γ_{i,j,ℓ,m}(A) ≤ 4. We now work by contradiction and assume that there exist four lattice points {i, j, ℓ, m} ⊆ Z^2_{0,N} such that for every k' ∈ Z^2_{0,N} with |k'|_∞ ≤ N/2 − a, at least one of the lattice points {i, j, ℓ, m} belongs to B_{k'}; that is, γ_{i,j,ℓ,m}(B_{k'}) ≥ 1 for each such k'. By the inclusion-exclusion principle (and the fact that the only non-trivial intersections are pairwise intersections), we obtain, for each {i, j, ℓ, m} ⊆ Z^2_{0,N} and M > 1, a lower bound on γ_{i,j,ℓ,m} evaluated on the union of M of the bad sets. Choosing M = 9 then implies that this quantity exceeds 4, which is clearly a contradiction, since γ_{i,j,ℓ,m} ≤ 4. Since we had to take M = 9, we need 2(|k'_9|_∞ + a) < N, which is the same as requiring that N > 4(9a + 8).
To complete the proof of Lemma 5.3, we may assume without loss of generality that not all of {i, j, ℓ, m} belong to G_{k'_0}, since if that were the case we could replace k' with its horizontal reflection k̄' = (−k'_1, k'_2) and obtain the analogous configuration for {i, j, ℓ, m} (see Figure 3 for a visual proof of this).
Then it is clear that Lemma 5.4 implies that there are rational functions {D_1, D_2, D_3, D_4}, with each D_i belonging to {D_+, D_−, D, 0} (excluding the case where they are all zero), such that W takes a single algebraic form on the relevant set; here k̄' = (−k'_1, k'_2) reverses upper and lower corners from right to left. Note we have chosen k' sufficiently large for this, and also to ensure that K_{±k'} and K_{±k̄'} stay surrounded entirely by Ḡ_{k'} and Ḡ_{k̄'} respectively.
To computationally verify this, we need the concept of a Gröbner basis, which can in some ways be considered an extension of a basis for the nullspace of a matrix in linear algebra. For this we need to fix a reasonable (total) ordering on monomials x^α, where α ∈ Z^n_{≥0} is a multi-index. A total ordering on monomials {x^α : α ∈ Z^n_{≥0}} is naturally equivalent to a total ordering on the set of multi-indices Z^n_{≥0}; see e.g. [Definition 1, page 55 [16]]. There are many reasonable choices of orderings on Z^n_{≥0}. A common choice is the lexicographic ordering, where α > β if the leftmost non-zero entry in α − β is positive. However, many choices are possible and often preferable from a computational standpoint [Chapter 2, Section 2 of [16]].
A monomial ordering allows one to define the leading term LT(f) of a polynomial f, defined to be the largest term c x^α appearing in that polynomial. In a similar manner, for an ideal I we can define LT(I), the set of leading terms of all non-zero elements of I. With a monomial ordering in hand, we can now define the Gröbner basis of an ideal.

Definition A.6 (Gröbner Basis). Fix a monomial order on K[x_1, . . ., x_n]. A finite subset G = {g_1, . . ., g_k} of an ideal I is called a Gröbner basis for I if ⟨LT(g_1), . . ., LT(g_k)⟩ = ⟨LT(I)⟩.
One can show that every ideal I ⊂ K[x_1, . . ., x_n] has a Gröbner basis, and moreover that any Gröbner basis G = {g_1, . . ., g_k} satisfies ⟨g_1, . . ., g_k⟩ = I (see [Corollary 6; page 78 [16]]). We need to impose an additional constraint in order to single out a unique, minimal Gröbner basis.

Definition A.7. Given a monomial ordering, a reduced Gröbner basis for a polynomial ideal I is a Gröbner basis G of I such that: (i) the leading coefficient of p is 1 for all p ∈ G;
(ii) for all p ∈ G, no monomial of p lies in ⟨LT(G \ {p})⟩.
The following theorem summarizes now the sufficient condition we will use to prove inconsistency.
Theorem A.9 (Sufficient condition for inconsistency (see page 179, [16])). Let {f_1, . . ., f_s} be a collection of polynomials in K[x_1, . . ., x_n]. If, for a given monomial ordering, {1} is the reduced Gröbner basis of the ideal ⟨f_1, ..., f_s⟩, then V(f_1, . . ., f_s) = ∅ (i.e. {f_1, . . ., f_s} are algebraically inconsistent).

There exist many algorithms for computing the reduced Gröbner basis of a given ideal; we will use the Maple [1] implementation of the F4 algorithm [22]. Computing a reduced Gröbner basis can be very computationally intensive, hence it was important that we take many steps to reduce the complexity of the system.

B A key lemma regarding polynomials on the lattice
This lemma is an important technical trick that is used repeatedly throughout the paper. For example, it lets us deduce that if a polynomial vanishes at sufficiently many integer lattice points in some of its variables, then the polynomials making up the coefficients of those variables must all vanish.
Lemma B.1. Let P(x_1, ..., x_n, y_1, ..., y_m) be a polynomial in m + n complex variables and suppose that it is degree J viewed as a polynomial in the last m variables. Suppose that K ⊂ Z^m contains Z^m_{0,q} with 2q + 2 > J, or Z^m_0 ∩ (Z^m_q + i) with 2q + 1 > J and some i ∈ Z^m_0. Define the coefficients of P viewed as a polynomial in the last m variables: {f_1, ..., f_s} = coeffs(P, {y_1, ..., y_m}).
Then the following polynomial ideals are equal: ⟨f_1, ..., f_s⟩ = ⟨P_y : y ∈ K⟩.
Proof. First, consider the case m = 1. A general element of the ideal generated by the P_y for y ∈ K has the form f(x) = Σ_{y∈K} h_y(x) P_y(x) = Σ_{y∈K} h_y(x) Σ_{0≤j≤J} f_j(x) y^j, and hence ⟨P_y : y ∈ K⟩ ⊂ ⟨f_1, ..., f_s⟩. To see the other direction, consider any polynomial f ∈ ⟨f_1, ..., f_s⟩, f(x) = Σ_{0≤j≤J} h_j(x) f_j(x).
Note that since there are at least J + 1 distinct points in K, we can write each coefficient f_j(x) = Σ_{y∈K} c_{y,j} P_y(x) for some rational coefficients c_{y,j} (by inverting the Vandermonde matrix encoding the equations P_y(x) = Σ_{0≤j≤J} y^j f_j(x) for all y ∈ K). Therefore, we can write f(x) = Σ_{0≤j≤J} h_j(x) f_j(x) = Σ_{0≤j≤J} Σ_{y∈K} h_j(x) c_{y,j} P_y(x), and therefore ⟨f_1, ..., f_s⟩ ⊆ ⟨P_y : y ∈ K⟩.
Consider next the case m > 1. As in the m = 1 case, it is straightforward to check that ⟨P_y : y ∈ K⟩ ⊆ ⟨f_1, ..., f_s⟩. The other inclusion can be proved analogously using the m-dimensional Vandermonde matrix. One may alternatively see it using an iterative argument, as in the statement regarding affine varieties. Indeed, writing P(x_1, ..., x_n, y_1, ..., y_{m+1}) = g_{J+1}(x_1, ..., x_n, y_1, ..., y_m) y_{m+1}^J + ... + g_1(x_1, ..., x_n, y_1, ..., y_m), the coefficient polynomials g_1, ..., g_{J+1} can be solved for in terms of the evaluations of P at the points y ∈ K by a usual Vandermonde matrix. By the assumption on the set K, this argument can be iterated until all of the y_j's have been eliminated from the coefficients, proving the statement about the polynomial ideals.
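The Vandermonde step in the proof of Lemma B.1 can be illustrated symbolically: evaluating a polynomial of degree J in y at J + 1 points determines its coefficient polynomials f_j as exact linear combinations of the evaluations. A toy SymPy sketch (with a made-up polynomial):

```python
import sympy as sp

# Toy illustration of the Vandermonde argument in Lemma B.1: a polynomial of
# degree J = 2 in y, with coefficients f_j(x), evaluated at J + 1 = 3 points.
x, y = sp.symbols('x y')
f = [x**2 + 1, 3*x, x - 2]                    # made-up coefficient polynomials
P = sum(fj * y**j for j, fj in enumerate(f))

pts = [0, 1, 2]                               # J + 1 distinct lattice points
V = sp.Matrix([[p**j for j in range(3)] for p in pts])   # Vandermonde matrix
Pvals = sp.Matrix([P.subs(y, p) for p in pts])
recovered = V.solve(Pvals)                    # f_j as combinations of the P_y
assert all(sp.expand(recovered[j] - f[j]) == 0 for j in range(3))
```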

C Computer code
In this section we include printouts of the code and output for the various Maple worksheets needed to prove several key results using techniques from algebraic geometry. The worksheets can be provided upon request. Next, we define some functions that define the various coefficients: the distorted lattice norm and skew product with distortion parameter, the Euler coefficient, the various algebraic expressions that $D$ can take, and the saturation polynomial. Now we generate all possible unordered lists of size 4, sampled with replacement from the list of 4 possible functions, removing the all-zero case.
By standard combinatorics (stars and bars), there should be 34 of these cases to consider; indeed, we find that there are 34 total cases. With all these functions in hand, we now generate our set of all possible polynomials associated to the sum and its various possible algebraic forms by calculating the numerator. This generates a list of polynomials in the lattice variables for each case defined above; each polynomial is of degree at most 19 in $k$. We then collect the coefficients in $k$, which gives a collection of polynomials in the remaining variables. We add to this collection the linear polynomial and the saturating polynomial augmented with the extra variable $z$. This generates a list of 34 collections of polynomials.
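The count of 34 cases can be reproduced in a few lines of Python; the four symbolic labels below are stand-ins for the four possible forms $\{D_+, D, D_-, 0\}$.

```python
from itertools import combinations_with_replacement

forms = ('D+', 'D', 'D-', '0')

# Unordered samples of size 4 with replacement from 4 forms:
# stars and bars gives C(4 + 4 - 1, 4) = 35 multisets.
cases = list(combinations_with_replacement(forms, 4))
assert len(cases) == 35

# Removing the degenerate all-zero case leaves the 34 cases checked in Maple.
cases = [c for c in cases if c != ('0', '0', '0', '0')]
assert len(cases) == 34
```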
We now define the variable ordering and run the Gröbner basis algorithm with "grevlex" ordering on each collection of polynomials, producing a Gröbner basis for each case. Note that this step takes a little time to run (about 12 minutes total, depending on the CPU). Since each reduced Gröbner basis is $[1]$, this concludes the proof. QED
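The logic of this final step — a reduced Gröbner basis equal to $[1]$ certifies that the polynomial system has no common zeros over any field extension — can be illustrated outside of Maple. A minimal SymPy sketch on a toy inconsistent system (not the paper's polynomials):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy inconsistent system: x*y - 1 = 0 forces x != 0 while x = 0 forbids it,
# so the two polynomials have no common zero over any field extension.
G = sp.groebner([x*y - 1, x], x, y, order='grevlex')

# The reduced Groebner basis being [1] certifies exactly this: the ideal
# generated by the system is the whole polynomial ring.
assert list(G.exprs) == [1]
```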

Proposition 3.11.
Let $\{H_k\} := \{H_k : k \in \mathbb{Z}^2_{0,N}\}$ be the matrices defined by (3.3) in $\partial_{w_k}$ coordinates. Then the lifts $X_0$, $\{\partial_{a_k}, \partial_{b_k} : k \in K\}$ satisfy the uniform parabolic Hörmander condition on $S\mathbb{C}^{\mathbb{Z}^2_{+,N}}$. By Proposition 3.7, it is then clear that the complexified algebra of endomorphisms satisfies $\operatorname{Lie}(\{H_k\}) \otimes \mathbb{C} = \mathfrak{sl}(T_w\mathbb{C}^{\mathbb{Z}^2_{+,N}}) \otimes \mathbb{C}$.
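The kind of bracket-generation computation underlying such spanning statements can be sketched in general: starting from a set of generators, repeatedly adjoin commutators $[A, B] = AB - BA$ until the spanned subspace stabilizes, then compare its dimension to $\dim \mathfrak{sl}(n) = n^2 - 1$. A minimal SymPy sketch with toy $2 \times 2$ generators (not the paper's matrices $H_k$):

```python
import sympy as sp

def lie_closure_dim(gens):
    """Dimension of the span-closure of square matrices under [A, B] = AB - BA."""
    n = gens[0].shape[0]
    vec = lambda M: sp.Matrix([M[i, j] for i in range(n) for j in range(n)])
    mats = list(gens)
    basis = sp.Matrix.hstack(*[vec(g) for g in mats])
    rank, changed = basis.rank(), True
    while changed:
        changed = False
        for A in list(mats):
            for B in list(mats):
                C = A * B - B * A
                cand = sp.Matrix.hstack(basis, vec(C))
                if cand.rank() > rank:     # commutator leaves the current span
                    basis, rank = cand, cand.rank()
                    mats.append(C)
                    changed = True
    return rank

# Toy generators e, f of sl(2): their bracket produces h = diag(1, -1),
# so the closure is all of sl(2), of dimension 2^2 - 1 = 3.
e = sp.Matrix([[0, 1], [0, 0]])
f = sp.Matrix([[0, 0], [1, 0]])
assert lie_closure_dim([e, f]) == 3
```

For the actual matrices $H_k$ such a brute-force closure would be far too expensive; the point of the paper's reduction is precisely to replace it with Gröbner basis computations in exact rational arithmetic.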

Remark 4.6.
Note that the constraint set $C_N$ defined in (4.2) is fundamental to the symmetry of the sum $D_i + D_j + D_\ell + D_m$. Each constraint is necessary in the sense that if any one of them fails, then we automatically have $D_i + D_j + D_\ell + D_m = 0$ due to the inversion symmetry $D_{-i} = -D_i$.

Lemma 5.3.
Let $a \geq 1$ and suppose that $N > 4(9a + 8)$. Then for each fixed $(i, j, \ell, m) \in C_N$ there exists a $k \in \mathbb{Z}^2_{0,N-a}$ and four rational functions $\{D_1, D_2, D_3, D_4\}$, each taking one of the four possible forms $\{D_+, D, D_-, 0\}$, with at least one of the $D_i \neq 0$, such that for all $k' \in \mathbb{Z}^2_0$ with $|k - k'|_\infty \leq a$, $W$ takes the form

We verify this by computing the Gröbner basis of $\tilde{I}$ for all possible choices of algebraic functions $(D_1, D_2, D_3, D_4)$ sampled from the set $\{D_+, D, D_-, 0\}$ (excluding $(0, 0, 0, 0)$). Due to the symmetry of the sum (5.2) and the constraint $i + j + \ell + m = 0$, up to relabeling $i, j, \ell, m$ we can disregard the order in which we consider $(D_1, D_2, D_3, D_4)$; therefore the total number of ideals we have to check is just the number of ways to draw an unordered sample $\{D_1, D_2, D_3, D_4\}$ of 4 elements from the set $\{D_+, D, D_-, 0\}$ with replacement (excluding $\{0, 0, 0, 0\}$), namely 34. This seemingly tedious task is carried out effortlessly by Maple (see Appendix C.3 for the code), showing that the reduced Gröbner basis for $\tilde{I}$ in every case is $\{1\}$. This concludes the proof of Proposition 5.1 (and hence of Theorem 1.1).

Figure 2: An illustration of the empty triple intersection $B_1 \cap B_2 \cap B_3 = \emptyset$. The gaps between the sets are exaggerated for visual clarity.
C.1 "Spanning" Proof of Proposition 4.11

First we begin by loading the Groebner package in Maple. Next, we define some functions that define the various coefficients: the distorted lattice norm and skew product with distortion parameter, the Euler coefficient, the algebraic expression for $D$, and the saturation polynomial. With these functions in hand, we now generate our set of polynomials by calculating the numerator and then collecting the coefficients in $k$; each polynomial is of degree at most 19 in $k$. We define the variable ordering and then run the Gröbner basis algorithm with "grevlex" ordering on the collection of polynomials, together with the extra condition and the saturating polynomial with the extra variable $z$. Since the Gröbner basis is $[1]$, this concludes the proof. QED

C.2 "Infinite Dimensional" Distinctness Proof

First we begin by loading the Groebner package in Maple.
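The "saturating polynomial with the extra variable $z$" appearing in these worksheets is an instance of the standard Rabinowitsch trick: to certify that a system has no common zeros away from a locus $q = 0$, one adjoins $1 - zq$ and checks that the augmented ideal is the whole ring. A toy SymPy illustration (the system and the choice $q = x$ are illustrative):

```python
import sympy as sp

x, z = sp.symbols('x z')

# The system {x^2} has the common zero x = 0, so its reduced basis is not [1].
assert list(sp.groebner([x**2], x, order='grevlex').exprs) != [1]

# Adjoining the saturating polynomial 1 - z*q with q = x excludes the locus
# x = 0: the augmented system has no common zeros, and its basis is [1].
G = sp.groebner([x**2, 1 - z*x], x, z, order='grevlex')
assert list(G.exprs) == [1]
```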

C.3 Full Distinctness Proof
Moreover, since $\{H_k\}$ are constant matrices, this equality is uniform in $w$. It follows by Corollary 3.8 that $X_0$, $\{\partial_{a_k}, \partial_{b_k} : k \in K\}$ satisfy the uniform parabolic Hörmander condition on $S\mathbb{C}^{\mathbb{Z}^2_{+,N}}$.
Remark 3.12. It is important to note that each matrix $H_k$ has a banded structure, with non-zero entries occurring on the band $\ell - j = k$ (except when $c_{j,k} = 0$). This banded structure is a consequence of the non-local frequency coupling present in the non-linearity. Such non-local interactions pose a significant challenge when studying $\operatorname{Lie}(\{H_k\})$, and are what make projective spanning for Navier-Stokes and other PDEs so much more challenging than for locally coupled models like Lorenz-96 or shell models like GOY and SABRA.
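The band constraint can be visualized on a toy one-dimensional analogue of the frequency lattice; the index set and coupling rule below are illustrative stand-ins, not the paper's $\mathbb{Z}^2_{0,N}$ and coefficients $c_{j,k}$.

```python
import numpy as np

N, k = 8, 3                    # toy frequency cutoff and coupling wavenumber
freqs = np.arange(-N, N + 1)   # one-dimensional stand-in for the lattice
n = len(freqs)

# H[a, b] may be nonzero only when the frequencies differ by exactly k,
# mirroring the band l - j = +/- k produced by a convolution nonlinearity.
H = np.zeros((n, n), dtype=int)
for a, fj in enumerate(freqs):
    for b, fl in enumerate(freqs):
        if abs(fl - fj) == k:
            H[a, b] = 1

# Every nonzero entry sits exactly k off the diagonal: a banded matrix.
rows, cols = np.nonzero(H)
assert set(cols - rows) == {-k, k}
```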