1 Introduction

Parallel structures are of great interest in classical Riemannian geometry (see [3, 6, 17]) as well as in affine differential geometry [4, 8, 9, 12,13,14,15]. Higher order parallel structures are a natural generalization of parallel structures and are widely studied as well [6, 7, 23,24,25].

In [2] O. Baues and V. Cortés studied affine hypersurfaces equipped with an almost complex structure. They showed that there is a direct relation between simply connected special Kähler manifolds [10] and improper affine hyperspheres. Later V. Cortés, together with M.-A. Lawn and L. Schäfer, proved a similar result for special para-Kähler manifolds [5]. In both cases an important role was played by the Kählerian (resp. para-Kählerian) symplectic form \(\omega \). The concept of special affine hyperspheres was generalized by the first author in [21]. Some other results related to affine hypersurfaces with almost complex structures can also be found in the paper of M. Kon [16]. In all the above cases an important role was played by an (almost) symplectic structure related in some way to the induced affine structure on a hypersurface. In particular, the relation between an almost symplectic structure \(\omega \), the induced affine connection \(\nabla \) and its curvature R seemed crucial.

The above results motivated the first author to study, in a more general setting, non-degenerate affine hypersurfaces \(f:M\rightarrow \mathbb {R}^{2n+1}\) with a transversal vector field \(\xi \) additionally equipped with an almost symplectic structure \(\omega \); more precisely, affine hypersurfaces with the property \(\nabla ^p\omega =0\) or, even more generally, \(R^p\omega =0\). In [19] it was shown that if \(\dim M\ge 4\) the condition \(R\omega =0\) implies that \(\nabla \) must be flat (which generalizes a result obtained in [16]), and that the result extends to an arbitrary power of R under the additional assumption that the second fundamental form is positive definite and the transversal vector field \(\xi \) is locally equiaffine (i.e. \(d\tau =0\)). Namely, we have

Theorem 1.1

([19]). Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) be a non-degenerate affine hypersurface (\(\dim M\ge 4\)) with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). Additionally assume that the second fundamental form is positive definite on M. If \(R^p\omega =0\) for some positive integer p then \(\nabla \) is flat.

From the above theorem it follows that

Theorem 1.2

([19]). Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) be a non-degenerate affine hypersurface (\(\dim M\ge 4\)) with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). Additionally assume that the second fundamental form is positive definite on M. If \(\nabla ^p\omega =0\) for some positive integer p then \(\nabla \) is flat.

In the very same paper it was shown that the assumption that the second fundamental form is positive definite cannot be relaxed, since there exists an affine hypersurface with the property \(R^2\omega =0\) which is not flat.

Later, in [20], it was shown that for affine hypersurfaces with a Lorentzian second fundamental form we still have strong constraints on the shape operator S, provided \(\dim M\ge 6\). More precisely, although it cannot be shown that \(S=0\) (which is equivalent to \(\nabla \) being flat), one may still show that \({\text {rank}}S\le 1\). Recently ([22]) it was shown that the same constraints apply when \(\dim M = 4\). Combining the results from [20, 22] we have the following theorems:

Theorem 1.3

([20, 22]). Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) (\(\dim M\ge 4\)) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). If \(R^p\omega =0\) for some \(p\ge 1\) and the second fundamental form is Lorentzian on M (that is, it has signature \((2n-1,1)\)) then the shape operator S has rank \(\le 1\).

Theorem 1.4

([20, 22]). Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) (\(\dim M\ge 4\)) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). If \(\nabla ^p\omega =0\) for some \(p\ge 1\) and the second fundamental form is Lorentzian on M (that is, it has signature \((2n-1,1)\)) then the shape operator S has rank \(\le 1\).

The main purpose of the present paper is to prove that the assumption that the second fundamental form is Lorentzian can be dropped, and that Theorems 1.3 and 1.4 stay true for an arbitrary non-degenerate second fundamental form. The main difficulty of this generalization comes from the fact that, due to the exponential growth of the number of possible cases, the methods from [20,21,22] cannot be easily repeated. For this reason we need to change the approach and first focus on particular types of Jordan blocks rather than on all possible configurations. Reducing the number of allowed Jordan blocks dramatically decreases the number of configurations we need to consider when proving the main theorems of this paper.

The paper is organized as follows. In Sect. 2 we briefly recall the basic formulas of affine differential geometry, the Jordan decomposition and some basic definitions from symplectic geometry. In this section we also prove a simple but important lemma about the simultaneous decomposition of the shape operator S and the second fundamental form h. Section 3 is devoted to real Jordan blocks. The main result of this section is that the Jordan decomposition of the shape operator cannot contain real Jordan blocks of dimension greater than or equal to 3 and contains at most one block of dimension 2. In Sect. 4 we study complex Jordan blocks. It is shown that the condition \(R^p\omega =0\) implies that the Jordan decomposition of the shape operator cannot contain complex Jordan blocks. Section 5 contains the main results of this paper. Based on the results from Sects. 3 and 4, we show that if there exists an almost symplectic structure \(\omega \) satisfying the condition \(R^p\omega =0\) or \(\nabla ^p\omega =0\) for some positive integer p, then the rank of the shape operator must be \(\le 1\). We conclude the section with a general example.

2 Preliminaries

We briefly recall the basic formulas of affine differential geometry. For more details, we refer to [18]. Let \(f:M\rightarrow \mathbb {R}^{n+1}\) be an orientable connected differentiable n-dimensional hypersurface immersed in the affine space \(\mathbb {R}^{n+1}\) equipped with its usual flat connection \({\text {D}}\). Then for any transversal vector field \(\xi \) we have

$$\begin{aligned} {\text {D}}_Xf_*Y=f_*(\nabla _XY)+h(X,Y)\xi \end{aligned}$$
(1)

and

$$\begin{aligned} {\text {D}}_X\xi =-f_*(SX)+\tau (X)\xi , \end{aligned}$$
(2)

where X, Y are vector fields tangent to M. It is known that \(\nabla \) is a torsion-free connection, h is a symmetric bilinear form on M, called the second fundamental form, S is a tensor of type (1, 1), called the shape operator, and \(\tau \) is a 1-form, called the transversal connection form. The vector field \(\xi \) is called equiaffine if \(\tau =0\). When \(d\tau =0\), the vector field \(\xi \) is called locally equiaffine.

When h is non-degenerate, it defines a pseudo-Riemannian metric on M. In this case we say that the hypersurface (or the hypersurface immersion) is non-degenerate. In this paper we always assume that f is non-degenerate. We have the following

Theorem 2.1

([18], Fundamental equations). For an arbitrary transversal vector field \(\xi \) the induced connection \(\nabla \), the second fundamental form h, the shape operator S, and the 1-form \(\tau \) satisfy the following equations:

$$\begin{aligned}&R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY, \end{aligned}$$
(3)
$$\begin{aligned}&(\nabla _X h)(Y,Z)+\tau (X)h(Y,Z)=(\nabla _Y h)(X,Z)+\tau (Y)h(X,Z), \end{aligned}$$
(4)
$$\begin{aligned}&(\nabla _X S)(Y)-\tau (X)SY=(\nabla _Y S)(X)-\tau (Y)SX, \end{aligned}$$
(5)
$$\begin{aligned}&h(X,SY)-h(SX,Y)=2d\tau (X,Y). \end{aligned}$$
(6)

Equations (3), (4), (5), and (6) are called the equations of Gauss, Codazzi for h, Codazzi for S and Ricci, respectively.

Let \(\omega \) be a non-degenerate 2-form on a manifold M. The form \(\omega \) is called an almost symplectic structure. It is easy to see that if a manifold M admits an almost symplectic structure then M is an orientable manifold of even dimension. The structure \(\omega \) is called a symplectic structure if it is almost symplectic and additionally satisfies \(d\omega =0\). The pair \((M,\omega )\) is called an (almost) symplectic manifold if \(\omega \) is an (almost) symplectic structure on M.

Recall ([1]) that an affine connection \(\nabla \) on an almost symplectic manifold \((M,\omega )\) is called an almost symplectic connection if \(\nabla \omega =0\). An affine connection \(\nabla \) on an almost symplectic manifold \((M,\omega )\) is called a symplectic connection if it is almost symplectic and torsion-free.

Now we recall the well-known theorem about the Jordan normal form (see e.g. Th. A.2.6 in [11]).

Theorem 2.2

([11], Jordan). If \(A:V\rightarrow V\) is an endomorphism of a real finite-dimensional vector space V then there exists a basis of V such that the matrix of the endomorphism A in this basis has the form

$$\begin{aligned} \left[ \begin{matrix} L_1 &{}\quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad L_2 &{} \quad \ldots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \ldots \\ 0 &{} \quad 0 &{} \quad \ldots &{} \quad L_s \end{matrix}\right] , \end{aligned}$$
(7)

where \(L_i\) is the Jordan block corresponding to the eigenvalue \(\lambda _i\) and given by the formula

$$\begin{aligned} \left[ \begin{matrix} \lambda _i &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 &{} \quad 0 \\ 1 &{} \quad \lambda _i &{}\quad 0 &{}\quad \ldots &{} \quad 0 &{} \quad 0\\ 0 &{} \quad 1 &{} \quad \lambda _i &{}\quad \ldots &{} \quad 0 &{} \quad 0\\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \ldots &{} \ldots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad \lambda _i &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 1 &{} \quad \lambda _i \end{matrix}\right] \in M(k_i,k_i,\mathbb {R}), \end{aligned}$$
(8)

when \(\lambda _i\) is a real number, or by the formula

$$\begin{aligned} \left[ \begin{matrix} B_i &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 &{} \quad 0 \\ I &{} \quad B_i &{} \quad 0 &{} \quad \ldots &{} \quad 0 &{} \quad 0\\ 0 &{} \quad I &{} \quad B_i &{} \quad \ldots &{} \quad 0 &{} \quad 0\\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \ldots &{} \quad \ldots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad B_i &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad I &{} \quad B_i \end{matrix}\right] \in M(2k_i,2k_i,\mathbb {R}), \end{aligned}$$
(9)

where

$$\begin{aligned} B_i= \left[ \begin{matrix} \alpha _i &{} \quad \beta _i\\ -\beta _i &{} \quad \alpha _i \end{matrix} \right] ,\qquad I= \left[ \begin{matrix} 1 &{} \quad 0\\ 0 &{}\quad 1 \end{matrix} \right] \in M(2,2,\mathbb {R}), \end{aligned}$$

when \(\lambda _i=\alpha _i+i\beta _i\) (\(\beta _i\ne 0\)) is a complex number.

A square matrix P of dimension n is called a sip matrix (standard involutory permutation) [11] if it has the form:

$$\begin{aligned} \left[ \begin{matrix} 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0 &{} \quad 1 \\ 0 &{} \quad 0 &{} \quad \cdots &{} \quad 1 &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \cdots &{} \quad \cdots \\ 0 &{} \quad 1 &{} \quad \cdots &{} \quad 0 &{} \quad 0 \\ 1 &{} \quad 0 &{} \quad \cdots &{} \quad 0 &{} \quad 0 \end{matrix}\right] . \end{aligned}$$
(10)

Note that P is a non-singular symmetric matrix and \(P^2=I\). In particular, all its eigenvalues are equal to \(\pm 1\). Moreover, it is easy to verify that the signature of P is given by the following formula:

$$\begin{aligned} {\text {sig}}P= {\left\{ \begin{array}{ll} (\frac{n}{2},\frac{n}{2}), &{} \text {if { n} is even} \\ (\frac{n+1}{2},\frac{n-1}{2}), &{} \text {if { n} is odd.} \end{array}\right. } \end{aligned}$$
(11)
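These properties of the sip matrix are easy to confirm numerically. The following sketch (assuming NumPy; the helper name `sip` is ours) checks that \(P^2=I\) and that the signature agrees with formula (11) for small n.

```python
import numpy as np

def sip(n):
    """Sip matrix (10): ones on the anti-diagonal, zeros elsewhere."""
    return np.fliplr(np.eye(n))

for n in (4, 5, 6, 7):
    P = sip(n)
    assert np.allclose(P, P.T)            # P is symmetric
    assert np.allclose(P @ P, np.eye(n))  # P^2 = I, so all eigenvalues are +-1
    eig = np.linalg.eigvalsh(P)
    pos, neg = int((eig > 0).sum()), int((eig < 0).sum())
    # compare with the signature formula (11)
    expected = (n // 2, n // 2) if n % 2 == 0 else ((n + 1) // 2, (n - 1) // 2)
    assert (pos, neg) == expected
    print(n, (pos, neg))
```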

Theorem 2.3

(Th. 6.1.5 in [11]). Let H be a real, invertible, symmetric matrix of dimension n. Then for every square, n-dimensional, H-selfadjoint matrix A (i.e. \(A^{T}H=HA\)) there exists a basis \(\{e_1,\ldots ,e_n\}\) such that

$$\begin{aligned} A=J_1\oplus \cdots \oplus J_t\oplus J_{t+1}\oplus \cdots \oplus J_{t+s}, \end{aligned}$$
(12)

where \(J_1,\ldots ,J_t\) are Jordan blocks of type (8) and \(J_{t+1},\ldots ,J_{t+s}\) are Jordan blocks of type (9). Moreover

$$\begin{aligned} H=\varepsilon _1 P_1\oplus \cdots \oplus \varepsilon _t P_t\oplus P_{t+1}\oplus \cdots \oplus P_{t+s}, \end{aligned}$$
(13)

where \(P_j\) is a sip matrix of dimension equal to the dimension of the block \(J_j\) for \(j=1,\ldots ,t+s\), and \(\varepsilon _j=\pm 1\) for \(j=1,\ldots ,t\). The signs \(\varepsilon _j\) are determined uniquely by (A, H) up to a permutation of the signs in those blocks of (13) which correspond to Jordan blocks of A with the same real eigenvalue and the same size.
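The content of Theorem 2.3 can be illustrated on a single block: a real Jordan block of type (8) is selfadjoint with respect to (any sign of) the sip matrix of the same size. A minimal numerical check (assuming NumPy; the helper names are ours):

```python
import numpy as np

def jordan_block(lam, k):
    """Real Jordan block of type (8): lam on the diagonal, 1 on the subdiagonal."""
    return lam * np.eye(k) + np.eye(k, k=-1)

def sip(k):
    """Sip matrix (10)."""
    return np.fliplr(np.eye(k))

# A = J and H = eps * P form a pair as in (12)-(13): A is H-selfadjoint, A^T H = H A
for k in (2, 3, 5):
    for eps in (1, -1):
        J, H = jordan_block(2.0, k), eps * sip(k)
        assert np.allclose(J.T @ H, H @ J)
```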

For a tensor field T of type (0, p) its covariant derivative \(\nabla T\) is a tensor field of type \((0,p+1)\) given by the formula:

$$\begin{aligned}&(\nabla T)(X_1,X_2,\ldots ,X_{p+1}):=X_1(T(X_2,\ldots ,X_{p+1}))\\&\qquad -\sum _{i=2}^{p+1}T(X_2,\ldots ,\nabla _{X_1}X_i,\ldots ,X_{p+1}). \end{aligned}$$

Higher order covariant derivatives of T can be defined by recursion:

$$\begin{aligned} (\nabla ^{k+1} T)=\nabla (\nabla ^kT). \end{aligned}$$

To simplify computation it is often convenient to define \(\nabla ^0T:= T\).

If R is the curvature tensor of an affine connection \(\nabla \), one can define a new tensor \(R\cdot T\) of type \((0,p+2)\) by the formula

$$\begin{aligned} (R\cdot T)(X_1,X_2,\ldots ,X_{p+2}):= -\sum _{i=3}^{p+2}T(X_3,\ldots ,R(X_1,X_2)X_i,\ldots ,X_{p+2}). \end{aligned}$$

Analogously to the previous case, we may define a tensor \(R^k\cdot T\) of type \((0,2k+p)\) using the following recursive formula:

$$\begin{aligned} R^k\cdot T=R\cdot (R^{k-1}\cdot T) \end{aligned}$$

and additionally \(R^0\cdot T:=T\).

In order to simplify the notation, we will often omit “\(\cdot \)” in \(R^k\cdot T\) when no confusion arises. Thus we will often write \(R^k T\) instead of \(R^k\cdot T\).
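For hypersurfaces the curvature entering these formulas is determined by S and h through the Gauss equation (3), so the action of \(R^k\cdot \) on a tensor can be computed mechanically. The following sketch (assuming NumPy; the component convention `R[a, b, :, c]` for \(R(e_a,e_b)e_c\) and the helper names are ours) implements the recursive definition above:

```python
import numpy as np

def curvature(S, h):
    """Gauss equation (3): R(e_a,e_b)e_c = h(e_b,e_c) S e_a - h(e_a,e_c) S e_b.
    Returns R with R[a, b, :, c] = components of R(e_a, e_b) e_c."""
    n = S.shape[0]
    R = np.zeros((n, n, n, n))
    for a in range(n):
        for b in range(n):
            R[a, b] = np.outer(S[:, a], h[b, :]) - np.outer(S[:, b], h[a, :])
    return R

def R_dot(R, T):
    """(R.T)(X1,X2,X3,...,X_{p+2}) = -sum_i T(X3,...,R(X1,X2)X_i,...)."""
    p, n = T.ndim, T.shape[0]
    out = np.zeros((n, n) + T.shape)
    m = p + 2                       # fresh einsum label for the summation index
    for pos in range(p):            # replace slot `pos` of T by R(X1,X2)X_i
        idx_T = [2 + j for j in range(p)]
        idx_T[pos] = m
        out -= np.einsum(R, [0, 1, m, 2 + pos], T, idx_T, list(range(p + 2)))
    return out

def R_power(R, T, k):
    """R^k . T via the recursion R^k . T = R . (R^{k-1} . T), with R^0 . T = T."""
    for _ in range(k):
        T = R_dot(R, T)
    return T
```

Note that \(R\cdot T\) built this way is automatically antisymmetric in its first two arguments, since \(R(X,Y)=-R(Y,X)\).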

We conclude this section with the following lemma:

Lemma 2.4

Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \). Then for every point \(x\in M\) there exists a basis \(\{e_1,\ldots ,e_{2n}\}\) of \(T_xM\) such that the shape operator S and the second fundamental form h can be expressed in this basis in the block matrix form

$$\begin{aligned} S=\left[ \begin{matrix} S_1 &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad S_2 &{} \quad \ldots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \ldots \\ 0 &{} \quad 0 &{} \quad \ldots &{} \quad S_{q+r} \end{matrix}\right] , h=\left[ \begin{matrix} H_1 &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad H_2 &{} \quad \ldots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \ldots \\ 0 &{} \quad 0 &{} \quad \ldots &{} \quad H_{q+r} \end{matrix}\right] , \end{aligned}$$
(14)

and \(S_i\), \(H_i\) satisfy the following conditions:

  • For \(i=1,\ldots ,q+r\), \(\dim S_i=\dim H_i\).

  • For \(i=1,\ldots ,q\), \(S_i\) is a Jordan block of type (8), and \(\dim S_i\ge \dim S_{i+1}\) for \(i=1,\ldots ,q-1\).

  • For \(i=q+1,\ldots ,q+r\), \(S_i\) is a Jordan block of type (9), and \(\dim S_i\ge \dim S_{i+1}\) for \(i=q+1,\ldots ,q+r-1\).

  • For \(i=1,\ldots ,q\), \(H_i\) is, up to a sign, a sip matrix.

  • For \(i=q+1,\ldots ,q+r\), \(H_i\) is a sip matrix.

Proof

Since \(\xi \) is locally equiaffine we have \(d\tau =0\) and, in consequence, \(h(SX,Y)=h(X,SY)\) for all \(X,Y\in T_xM\). Now the thesis immediately follows from Theorem 2.3 and the fact that we can rearrange the Jordan blocks \(S_i\) and the matrices \(H_i\) in the desired order (rearranging the vectors \(\{e_1,\ldots ,e_{2n}\}\) if needed). \(\square \)

3 Real Jordan Blocks

In this section we study the properties of real Jordan blocks of the shape operator S.

In all the lemmas below we assume that \(f:M\rightarrow \mathbb {R}^{2n+1}\) is a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). The objects \(\nabla \), h, S and \(\tau \) are assumed to be induced by \(\xi \).

In the following lemmas (if not stated otherwise) we assume that \(S_1\) (from Lemma 2.4) is a k-dimensional block of the form

$$\begin{aligned} S_1=\left[ \begin{matrix} \alpha &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 &{} \quad 0 \\ 1 &{} \quad \alpha &{} \quad 0 &{} \quad \ldots &{} \quad 0 &{} \quad 0\\ 0 &{} \quad 1 &{} \quad \alpha &{} \quad \ldots &{} \quad 0 &{} \quad 0\\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \ldots &{} \quad \ldots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad \alpha &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 1 &{} \quad \alpha \end{matrix}\right] \in M(k,k,\mathbb {R}), \end{aligned}$$
(15)

where \(\alpha \in \mathbb {R}\) and

$$\begin{aligned} H_1=\left[ \begin{matrix} 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0 &{} \quad \varepsilon \\ 0 &{} \quad 0 &{} \quad \cdots &{} \quad \varepsilon &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \cdots &{} \quad \cdots \\ 0 &{} \quad \varepsilon &{} \quad \cdots &{} \quad 0 &{} \quad 0 \\ \varepsilon &{} \quad 0 &{} \quad \cdots &{} \quad 0 &{} \quad 0 \end{matrix}\right] \in M(k,k,\mathbb {R}), \end{aligned}$$

where \(\varepsilon \in \{-1,1\}\). By \(\{e_1,\ldots ,e_{2n}\}\) we will denote a basis of \(T_xM\) such that \(\{e_1,\ldots ,e_{k}\}\) is a basis of the subspace corresponding to \(S_1\).

Lemma 3.1

Let \(p\ge 1\). If \(k\ge 2\) and \(X_1,\ldots ,X_p\in {\text {span}} \{e_1,\ldots ,e_k\}=:V\) then for every \(i\in \{2,3,\ldots ,2n\}\)

$$\begin{aligned} R^p\omega (\underbrace{X_1,e_k,X_2,e_k,\ldots ,X_p,e_k}_{2p},e_i,e_k)=\pi (X_1)\cdot \ldots \cdot \pi (X_p)\varepsilon ^p\alpha ^p\omega (e_i,e_k), \end{aligned}$$
(16)

where \(\pi :V\rightarrow \mathbb {R}\) is a projection defined as follows:

$$\begin{aligned} \pi (X)=\lambda _1 \end{aligned}$$

if \(X=\lambda _1e_1+\ldots +\lambda _ke_k\in V\) for some \(\lambda _1,\ldots ,\lambda _k\in \mathbb {R}\).

Proof

First we shall prove the following formulas:

$$\begin{aligned} R(X,e_k)e_k&=-\pi (X)\varepsilon \alpha e_k, \end{aligned}$$
(17)
$$\begin{aligned} R(X,e_k)e_i&=-h(X,e_i)\alpha e_k, \end{aligned}$$
(18)
$$\begin{aligned} \pi (R(X,e_k)Y)&=\varepsilon \alpha \pi (X)\pi (Y), \end{aligned}$$
(19)

for all \(X,Y \in V\). To prove (17) we compute

$$\begin{aligned} R(X,e_k)e_k&=\underbrace{h(e_k,e_k)}_{0}S_1X-h(X,e_k)S_1e_k=-h(\lambda _1e_1+\ldots +\lambda _ke_k,e_k)\alpha e_k\\&=-\lambda _1\underbrace{h(e_1,e_k)}_{\varepsilon }\alpha e_k=-\pi (X)\varepsilon \alpha e_k. \end{aligned}$$

To prove (18) we compute

$$\begin{aligned} R(X,e_k)e_i&=\underbrace{h(e_k,e_i)}_{0}S_1X-h(X,e_i)S_1e_k=-h(X,e_i)\alpha e_k. \end{aligned}$$

To prove (19) we compute

$$\begin{aligned} \pi (R(X,e_k)Y)&=\pi (h(e_k,Y)S_1X-h(X,Y)S_1e_k)\\&=h(e_k,Y)\pi (S_1X)-h(X,Y)\underbrace{\pi (S_1e_k)}_{0}\\&=\varepsilon \pi (Y)\pi (X)\alpha . \end{aligned}$$

Now using formulas (17), (18), (19) we shall prove the thesis of the lemma. For \(p=1\) we have

$$\begin{aligned} (R\omega )(X_1,e_k,e_i,e_k)&=-\omega (R(X_1,e_k)e_i,e_k)-\omega (e_i,R(X_1,e_k)e_k)\\&=\underbrace{\omega (h(X_1,e_i)\alpha e_k,e_k)}_{0}+\omega (e_i,\pi (X_1)\varepsilon \alpha e_k)\\&=\pi (X_1)\varepsilon \alpha \omega (e_i,e_k). \end{aligned}$$

Assume that formula (16) is true for some \(p\ge 1\). Then for \(p+1\) we get

$$\begin{aligned} R^{p+1}\omega&(\underbrace{X_1,e_k,\ldots ,X_{p+1},e_k}_{2p+2},e_i,e_k)\\&=-R^p\omega (R(X_1,e_k)X_2,e_k,\ldots )\\&\quad -R^p\omega (X_2,R(X_1,e_k)e_k,\ldots )\\&\quad -R^p\omega (X_2,e_k,R(X_1,e_k)X_3,\ldots )\\&\quad -R^p\omega (X_2,e_k,X_3,R(X_1,e_k)e_k,\ldots )\\&\quad \cdots \\&\quad -R^p\omega (X_2,e_k,\ldots ,R(X_1,e_k)e_i,e_k)\\&\quad -R^p\omega (X_2,e_k,\ldots ,e_i,R(X_1,e_k)e_k)\\&=-\pi (R(X_1,e_k)X_2)\pi (X_3)\ldots \pi (X_{p+1})\varepsilon ^p\alpha ^p\omega (e_i,e_k)\\&\quad +R^p\omega (X_2,\pi (X_1)\varepsilon \alpha e_k,X_3,\ldots )\\&\quad -\pi (X_2)\pi (R(X_1,e_k)X_3)\pi (X_4)\ldots \pi (X_{p+1})\varepsilon ^p\alpha ^p\omega (e_i,e_k)\\&\quad +R^p\omega (X_2,e_k,X_3,\pi (X_1)\varepsilon \alpha e_k,\ldots )\\&\quad \cdots \\&\quad -\pi (X_2)\ldots \pi (X_{p+1})\underbrace{\pi (R(X_1,e_k)e_i)}_{0}\varepsilon ^p\alpha ^p \omega (e_i,e_k)\\&\quad +R^p\omega (X_2,e_k,\ldots ,e_i,\pi (X_1)\varepsilon \alpha e_k)\\&=-\varepsilon \alpha \pi (X_1)\pi (X_2)\pi (X_3)\ldots \pi (X_{p+1})\varepsilon ^p\alpha ^p\omega (e_i,e_k)\\&\quad +\varepsilon \alpha \pi (X_1)\pi (X_2)\pi (X_3)\ldots \pi (X_{p+1})\varepsilon ^p\alpha ^p\omega (e_i,e_k)\\&\quad -\varepsilon \alpha \pi (X_1)\pi (X_2)\pi (X_3)\ldots \pi (X_{p+1})\varepsilon ^p\alpha ^p\omega (e_i,e_k)\\&\quad +\varepsilon \alpha \pi (X_1)\pi (X_2)\pi (X_3)\ldots \pi (X_{p+1})\varepsilon ^p\alpha ^p\omega (e_i,e_k)\\&\quad \cdots \\&\quad -0\\&\quad +\varepsilon \alpha \pi (X_1) \pi (X_2)\ldots \pi (X_{p+1})\varepsilon ^p\alpha ^p\omega (e_i,e_k)\\&=\varepsilon ^{p+1}\alpha ^{p+1}\pi (X_1)\pi (X_2)\ldots \pi (X_{p+1})\omega (e_i,e_k). \end{aligned}$$

Now, by the induction principle the formula (16) holds for every \(p\ge 1\). \(\square \)

As an immediate consequence of Lemma 3.1 (setting \(X_1=X_2=\ldots =X_p=e_1\)) we obtain:

Corollary 3.2

If \(k\ge 2\) and \(p\ge 1\) then for every \(i\in \{2,\ldots ,2n\}\) we have

$$\begin{aligned} R^p\omega (e_1,e_k,\ldots ,e_1,e_k,e_i,e_k)=\varepsilon ^p\alpha ^p\omega (e_i,e_k). \end{aligned}$$
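Corollary 3.2 lends itself to a direct numerical check. In the sketch below (assuming NumPy; helper names and the component convention `R[a, b, :, c]` for \(R(e_a,e_b)e_c\) are ours) we take \(2n=k=4\), build \(S_1\) of type (15) and \(H_1=\varepsilon \cdot \) sip, recover R from the Gauss equation (3), and compare \(R^p\omega (e_1,e_k,\ldots ,e_1,e_k,e_i,e_k)\) with \(\varepsilon ^p\alpha ^p\omega (e_i,e_k)\) for a random \(\omega \):

```python
import numpy as np

def curvature(S, h):
    # Gauss equation (3): R(e_a,e_b)e_c = h(e_b,e_c) S e_a - h(e_a,e_c) S e_b
    n = S.shape[0]
    R = np.zeros((n, n, n, n))
    for a in range(n):
        for b in range(n):
            R[a, b] = np.outer(S[:, a], h[b, :]) - np.outer(S[:, b], h[a, :])
    return R

def R_dot(R, T):
    # (R.T)(X1,X2,X3,...) = -sum_i T(X3,...,R(X1,X2)X_i,...)
    p, n = T.ndim, T.shape[0]
    out = np.zeros((n, n) + T.shape)
    for pos in range(p):
        idx_T = [2 + j for j in range(p)]
        idx_T[pos] = p + 2
        out -= np.einsum(R, [0, 1, p + 2, 2 + pos], T, idx_T, list(range(p + 2)))
    return out

k = 4                                     # 2n = k = 4: a single real Jordan block
alpha, eps = 2.0, -1.0
S = alpha * np.eye(k) + np.eye(k, k=-1)   # S_1 of type (15)
h = eps * np.fliplr(np.eye(k))            # H_1 = eps * sip

R = curvature(S, h)
rng = np.random.default_rng(0)
A = rng.normal(size=(k, k))
W = A - A.T                               # a random almost symplectic form omega

T = W.copy()
for p in (1, 2, 3):
    T = R_dot(R, T)                       # T = R^p omega, an array of shape (k,)*(2p+2)
    for i in range(1, k):                 # e_i for i in {2,...,2n} (0-based: 1..k-1)
        lhs = T[(0, k - 1) * p + (i, k - 1)]
        assert np.isclose(lhs, eps**p * alpha**p * W[i, k - 1])
print("Corollary 3.2 verified for p = 1, 2, 3")
```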

In the next few lemmas we shall obtain some properties of \(R^p\omega \) under the assumption that \(k>3\).

Lemma 3.3

If \(k>3\) then

$$\begin{aligned} R(e_1,e_{k-1})e_1&=R(e_1,e_{k-1})e_{k-1}=0 \end{aligned}$$
(20)
$$\begin{aligned} R(e_1,e_{k-1})e_2&=\varepsilon S_1e_1=\varepsilon \alpha e_1+\varepsilon e_2 \end{aligned}$$
(21)
$$\begin{aligned} R(e_1,e_{k-1})e_k&=-\varepsilon S_1e_{k-1}=-\varepsilon \alpha e_{k-1}-\varepsilon e_k \end{aligned}$$
(22)
$$\begin{aligned} R(e_{k-1},e_{k})e_1&=\varepsilon S_1e_{k-1}=\varepsilon \alpha e_{k-1}+\varepsilon e_k \end{aligned}$$
(23)
$$\begin{aligned} R(e_{k-1},e_{k})e_2&=-\varepsilon S_1e_{k}=-\varepsilon \alpha e_{k} \end{aligned}$$
(24)
$$\begin{aligned} R(e_{k-1},e_{k})e_{k-1}&=R(e_{k-1},e_{k})e_{k}=0 \end{aligned}$$
(25)

Proof

The proof is an immediate consequence of the Gauss equation and the fact that

$$\begin{aligned} h(e_1,e_k)=h(e_2,e_{k-1})=\varepsilon \end{aligned}$$

and

$$\begin{aligned} h(e_1,e_1)&=h(e_1,e_2)=h(e_1,e_{k-1})=h(e_{k-1},e_{k-1})\\&=h(e_{k},e_{k-1})=h(e_{k},e_{k})=h(e_{2},e_{k})=0, \end{aligned}$$

provided \(k>3\). \(\square \)
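The six identities of Lemma 3.3 can be confirmed numerically for the smallest admissible block, \(k=4\). In the sketch below (assuming NumPy; R is recovered from the Gauss equation (3), and the 0-based indexing is ours) \(e_1,e_2,e_{k-1},e_k\) correspond to the rows of the identity matrix:

```python
import numpy as np

k = 4                                     # smallest block with k > 3
alpha, eps = 3.0, -1.0
S = alpha * np.eye(k) + np.eye(k, k=-1)   # S_1 of type (15)
h = eps * np.fliplr(np.eye(k))            # H_1 = eps * sip

def R(a, b, c):
    """Gauss equation (3): R(e_a,e_b)e_c, with 0-based indices."""
    return h[b, c] * S[:, a] - h[a, c] * S[:, b]

e = np.eye(k)
e1, e2, ek1, ek = e[0], e[1], e[k - 2], e[k - 1]

assert np.allclose(R(0, k - 2, 0), 0)                              # (20)
assert np.allclose(R(0, k - 2, k - 2), 0)                          # (20)
assert np.allclose(R(0, k - 2, 1), eps * (alpha * e1 + e2))        # (21)
assert np.allclose(R(0, k - 2, k - 1), -eps * (alpha * ek1 + ek))  # (22)
assert np.allclose(R(k - 2, k - 1, 0), eps * (alpha * ek1 + ek))   # (23)
assert np.allclose(R(k - 2, k - 1, 1), -eps * alpha * ek)          # (24)
assert np.allclose(R(k - 2, k - 1, k - 2), 0)                      # (25)
assert np.allclose(R(k - 2, k - 1, k - 1), 0)                      # (25)
print("formulas (20)-(25) hold for k = 4")
```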

Lemma 3.4

If \(k>3\) and \(p\ge 1\) we have

$$\begin{aligned} R^p\omega (e_1,e_{k-1},e_1,e_{k-1},\ldots ,e_1,e_{k-1})&=0, \end{aligned}$$
(26)
$$\begin{aligned} R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_{k-1})&= (-1)^p\varepsilon ^p(\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1})), \end{aligned}$$
(27)
$$\begin{aligned} R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_i,e_k)&=\varepsilon ^p(\alpha \omega (e_i,e_{k-1})+\omega (e_i,e_k))\nonumber \\&\text {for}\quad i\in \{1,\ldots ,2n\}\setminus \{2,k\}. \end{aligned}$$
(28)

Proof

First note that the formula (26) easily follows from (20) for every \(p\ge 1\).

In order to prove (27) let us note that for \(p=1\) we have

$$\begin{aligned}&R\omega (e_1,e_{k-1},e_2,e_{k-1}) \\&\quad =-\omega (R(e_1,e_{k-1})e_2,e_{k-1})-\omega (e_2,R(e_1,e_{k-1})e_{k-1})\\&\quad =-\omega (\varepsilon \alpha e_1+\varepsilon e_2,e_{k-1})=-\varepsilon \alpha \omega (e_1,e_{k-1})-\varepsilon \omega (e_2,e_{k-1}) \end{aligned}$$

thanks to (20) and (21). Now assume that (27) is true for some \(p\ge 1\). Using (20), (21) and (26) we compute

$$\begin{aligned} R^{p+1}\omega&(\underbrace{e_1,e_{k-1},e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p+2},e_2,e_{k-1})\\&=(R(e_1,e_{k-1})\cdot R^p\omega )(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p},e_2,e_{k-1})\\&=-R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},R(e_1,e_{k-1})e_2,e_{k-1})\\&=-R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},\varepsilon S_1e_1,e_{k-1})\\&=-\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},\alpha e_1+e_2,e_{k-1})\\&=-\varepsilon \alpha \underbrace{R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_{k-1})}_{0}\\&\quad -\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_{k-1})\\&=-\varepsilon (-1)^p\varepsilon ^p(\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1}))\\&=(-1)^{p+1}\varepsilon ^{p+1}(\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1})). \end{aligned}$$

Thus, by the induction principle (27) is true for all \(p\ge 1\).

In order to prove (28), first note that if \(e_i\) is h-orthogonal to both \(e_1\) and \(e_{k-1}\), we have

$$\begin{aligned} R(e_1,e_{k-1})e_i=h(e_{k-1},e_i)S_1e_1-h(e_1,e_i)S_1e_{k-1}=0. \end{aligned}$$
(29)

In particular the above holds for all \(i\in \{1,\ldots ,2n\}\setminus \{2,k\}\). Using (22) and (29) we get that for every \(i\in \{1,\ldots ,2n\}\setminus \{2,k\}\)

$$\begin{aligned} R\omega (e_1,e_{k-1},e_i,e_k)&=-\omega (\underbrace{R(e_1,e_{k-1})e_i}_{0},e_k)-\omega (e_i,R(e_1,e_{k-1})e_k)\\&=\varepsilon (\alpha \omega (e_i,e_{k-1})+\omega (e_i,e_k)) \end{aligned}$$

thus (28) is true for \(p=1\). Now assume that (28) holds for some \(p\ge 1\) and all \(i\in \{1,\ldots ,2n\}\setminus \{2,k\}\). We have

$$\begin{aligned} R^{p+1}\omega&(\underbrace{e_1,e_{k-1},e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p+2},e_i,e_k)\\&=(R(e_1,e_{k-1})\cdot R^p\omega )(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p},e_i,e_k)\\&=-R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_i,R(e_1,e_{k-1})e_k)\\&=\varepsilon \alpha R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_i,e_{k-1})\\&\quad +\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_i,e_k)\\&=\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_i,e_k)\\&=\varepsilon ^{p+1}(\alpha \omega (e_i,e_{k-1})+\omega (e_i,e_k)) \end{aligned}$$

since

$$\begin{aligned} R(e_1,e_{k-1})e_1=R(e_1,e_{k-1})e_{k-1}=R(e_1,e_{k-1})e_i=0 \end{aligned}$$

by (20) and (29). Now, by the induction principle (28) holds for all \(p\ge 1\). \(\square \)

Lemma 3.5

If \(k>3\) and \(p\ge 0\) we have

$$\begin{aligned}&R^{2p+1}\omega (\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{4p+2},e_2,e_k)=-\varepsilon \alpha (\omega (e_1,e_k)-\omega (e_2,e_{k-1})), \end{aligned}$$
(30)
$$\begin{aligned}&R^{2p+2}\omega (\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{4p+4},e_2,e_k)\nonumber \\&\quad =-\alpha (2\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1})+\omega (e_1,e_{k})). \end{aligned}$$
(31)

Proof

First note that by straightforward computations, using (20), (21) and (22), one may easily check that (30) and (31) are true for \(p=0\).

Let us assume that \(p>0\). In order to prove (30), using Lemma 3.3, we compute

$$\begin{aligned} R^{2p+1}\omega&(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{4p+2},e_2,e_k)\\&=(R(e_1,e_{k-1})\cdot R^{2p}\omega )(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{4p},e_2,e_k)\\&=-R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},R(e_1,e_{k-1})e_2,e_k)\\&\quad -R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,R(e_1,e_{k-1})e_k)\\&=-\varepsilon R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},\alpha e_1+e_2,e_k)\\&\quad -\varepsilon R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,-\alpha e_{k-1}-e_k)\\&=-\varepsilon \alpha R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_k)\\&\quad -\varepsilon R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k)\\&\quad +\varepsilon \alpha R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_{k-1})\\&\quad +\varepsilon R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k)\\&=-\varepsilon \alpha R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_k)\\&\quad +\varepsilon \alpha R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_{k-1}). \end{aligned}$$

Now using (28) (for \(i=1\)) and (27) we obtain

$$\begin{aligned} R^{2p+1}\omega&(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{4p+2},e_2,e_k)\\&=-\varepsilon \alpha R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_k)\\&\quad +\varepsilon \alpha R^{2p}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_{k-1})\\&=-\varepsilon \alpha \varepsilon ^{2p}(\alpha \omega (e_1,e_{k-1})+\omega (e_1,e_k)) \\&\quad +\varepsilon \alpha \cdot (-1)^{2p}\varepsilon ^{2p} (\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1}))\\&=-\varepsilon \alpha (\omega (e_1,e_k)-\omega (e_2,e_{k-1})) \end{aligned}$$

that is, (30) is true for all \(p\ge 0\).

In order to prove (31) we compute

$$\begin{aligned} R^{2p+2}\omega&(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{4p+4},e_2,e_k) \\&= -R^{2p+1}\omega (R(e_1,e_{k-1})e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k)\\&\quad -R^{2p+1}\omega (e_1,R(e_1,e_{k-1})e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k)\\&\quad \cdots \\&\quad -R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},R(e_1,e_{k-1})e_2,e_k)\\&\quad -R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,R(e_1,e_{k-1})e_k)\\&= -R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},R(e_1,e_{k-1})e_2,e_k)\\&\quad -R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,R(e_1,e_{k-1})e_k) \end{aligned}$$

where the last equality follows from (20). Now using (21) and (22) we get

$$\begin{aligned} R^{2p+2}\omega&(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{4p+4},e_2,e_k) \\&=-\varepsilon \alpha R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_k)\\&\quad -\varepsilon R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k)\\&\quad +\varepsilon \alpha R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_{k-1})\\&\quad +\varepsilon R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k)\\&=-\varepsilon \alpha R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_k)\\&\quad +\varepsilon \alpha R^{2p+1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_{k-1}). \end{aligned}$$

Again using (28) (for \(i=1\)) and (27) we obtain

$$\begin{aligned} R^{2p+2}\omega&(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{4p+4},e_2,e_k) \\&= -\varepsilon \alpha \varepsilon ^{2p+1}(\alpha \omega (e_1,e_{k-1})+\omega (e_1,e_k))\\&\quad +\varepsilon \alpha \cdot (-1)^{2p+1}\varepsilon ^{2p+1}(\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1}))\\&=-\alpha (2\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1})+\omega (e_1,e_{k})), \end{aligned}$$

which completes the proof of (31). \(\square \)

Lemma 3.6

If \(k>3\) and \(p\ge 1\) we have

$$\begin{aligned} R^p\omega&(e_{k-1},e_k,\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p-2},e_1,e_2) \nonumber \\&= \varepsilon ^p\alpha (\omega (e_1,e_k)+\omega (e_2,e_{k-1}))+\varepsilon ^p\omega (e_2,e_k) \end{aligned}$$
(32)

Proof

For \(p=1\), using (23) and (24), we directly check that

$$\begin{aligned} R\omega (e_{k-1},e_k,e_1,e_2)=\varepsilon \alpha (\omega (e_1,e_k)+\omega (e_2,e_{k-1}))+\varepsilon \omega (e_2,e_k). \end{aligned}$$

Now assume that (32) is true for some \(p\ge 1\). First we compute

$$\begin{aligned} R^{p+1}\omega&(e_{k-1},e_k,\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p},e_1,e_2)\\&=(R(e_{k-1},e_k)\cdot R^p\omega )(\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p},e_1,e_2)\\&=-R^p\omega (R(e_{k-1},e_k)e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_2)\\&\quad -R^p\omega (e_1,R(e_{k-1},e_k)e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_2)\\&\quad -\ldots \\&\quad -R^p\omega (e_1,e_{k-1},\ldots ,R(e_{k-1},e_k)e_1,e_{k-1},e_1,e_2)\\&\quad -R^p\omega (e_1,e_{k-1},\ldots ,e_1,R(e_{k-1},e_k)e_{k-1},e_1,e_2)\\&\quad -R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},R(e_{k-1},e_k)e_1,e_2)\\&\quad -R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,R(e_{k-1},e_k)e_2)\\&=-\varepsilon R^p\omega (e_k,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_2)\\&\quad -\varepsilon R^p\omega (e_1,e_{k-1},e_k,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_2)\\&\quad -\ldots \\&\quad -\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_k,e_{k-1},e_1,e_2)\\&\quad -R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},\varepsilon \alpha e_{k-1}+\varepsilon e_k,e_2)\\&\quad -R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,-\varepsilon \alpha e_k). \end{aligned}$$

Before we proceed we shall show that for \(p\ge 2\)

$$\begin{aligned} R^p\omega (\overbrace{e_1,e_{k-1}}^{(1)},\ldots ,\overbrace{e_k,e_{k-1}}^{(i)},\ldots ,\overbrace{e_1,e_{k-1}}^{(p)},e_1,e_2)=0 \end{aligned}$$

whenever \(i\in \{2,\ldots ,p\}\). Indeed, we have

$$\begin{aligned} R^p\omega&(\overbrace{e_1,e_{k-1}}^{(1)},\ldots ,\overbrace{e_k,e_{k-1}}^{(i)},\ldots ,\overbrace{e_1,e_{k-1}}^{(p)},e_1,e_2)\\&=-R^{p-1}\omega (R(e_1,e_{k-1})e_1,e_{k-1},\ldots )\\&\quad -R^{p-1}\omega (e_1,R(e_1,e_{k-1})e_{k-1},\ldots )\\&\quad -\cdots \\&\quad -R^{p-1}\omega (\ldots ,\overbrace{R(e_1,e_{k-1})e_k,e_{k-1}}^{(i-1)},\ldots )\\&\quad -R^{p-1}\omega (\ldots ,\overbrace{e_k,R(e_1,e_{k-1})e_{k-1}}^{(i-1)},\ldots )\\&\quad -\cdots \\&\quad -R^{p-1}\omega (\ldots ,R(e_1,e_{k-1})e_1,e_2)\\&\quad -R^{p-1}\omega (\ldots ,e_1,R(e_1,e_{k-1})e_2)\\&=-R^{p-1}\omega (\ldots ,\overbrace{R(e_1,e_{k-1})e_k,e_{k-1}}^{(i-1)},\ldots )\\&\quad -R^{p-1}\omega (\ldots ,e_1,R(e_1,e_{k-1})e_2) \end{aligned}$$

thanks to (20). Now formulas (21) and (22) imply that

$$\begin{aligned} R^p\omega (\overbrace{e_1,e_{k-1}}^{(1)},\ldots ,\overbrace{e_k,e_{k-1}}^{(i)},\ldots ,\overbrace{e_1,e_{k-1}}^{(p)},e_1,e_2)=0. \end{aligned}$$
(33)

Now applying (33) if needed (that is if \(p>1\)) we conclude that

$$\begin{aligned} R^{p+1}\omega&(e_{k-1},e_k,\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p},e_1,e_2)\\&=\varepsilon R^p\omega (e_{k-1},e_k,e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_2)\\&\quad -\varepsilon \alpha R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_{k-1},e_2)\\&\quad -\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_k,e_2)\\&\quad +\varepsilon \alpha R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_k)\\&=\varepsilon R^p\omega (e_{k-1},e_k,e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_2)\\&\quad +\varepsilon \alpha R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_{k-1})\\&\quad +\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k)\\&\quad +\varepsilon \alpha R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_k).\\ \end{aligned}$$

By the induction hypothesis, using formulas (27) and (28) (for \(i=1\)), we obtain

$$\begin{aligned} R^{p+1}\omega&(e_{k-1},e_k,\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p},e_1,e_2)\\&=\varepsilon \Big (\varepsilon ^p\alpha (\omega (e_1,e_k)+\omega (e_2,e_{k-1}))+\varepsilon ^p\omega (e_2,e_k)\Big )\\&\quad +\varepsilon \alpha \Big ( (-1)^p\varepsilon ^p(\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1}))\Big )\\&\quad +\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k)\\&\quad +\varepsilon \alpha \Big (\varepsilon ^p(\alpha \omega (e_1,e_{k-1})+\omega (e_1,e_k))\Big )\\&=\varepsilon ^{p+1}\Big (2\alpha \omega (e_1,e_k)+(\alpha +(-1)^p\alpha )\omega (e_2,e_{k-1})\\&\qquad \qquad +(\alpha ^2+(-1)^p\alpha ^2)\omega (e_1,e_{k-1})+\omega (e_2,e_{k})\Big )\\&\quad +\varepsilon R^p\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_2,e_k). \end{aligned}$$

Finally we need to consider two cases. If p is odd, using Lemma 3.5, we obtain

$$\begin{aligned} R^{p+1}\omega&(e_{k-1},e_k,\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p},e_1,e_2)\\&=\varepsilon ^{p+1}\Big (2\alpha \omega (e_1,e_k)+(\alpha +(-1)^p\alpha )\omega (e_2,e_{k-1})\\&\quad +(\alpha ^2+(-1)^p\alpha ^2)\omega (e_1,e_{k-1})+\omega (e_2,e_{k})\Big )\\&\quad +\varepsilon (-\varepsilon \alpha (\omega (e_1,e_k)-\omega (e_2,e_{k-1})))\\&=\alpha (\omega (e_1,e_{k})+\omega (e_2,e_{k-1}))+\omega (e_2,e_{k}). \end{aligned}$$

If p is even, again from Lemma 3.5 we get

$$\begin{aligned} R^{p+1}\omega&(e_{k-1},e_k,\underbrace{e_1,e_{k-1},\ldots ,e_1,e_{k-1}}_{2p},e_1,e_2)\\&=\varepsilon ^{p+1}\Big (2\alpha \omega (e_1,e_k)+(\alpha +(-1)^p\alpha )\omega (e_2,e_{k-1})\\&\quad +(\alpha ^2+(-1)^p\alpha ^2)\omega (e_1,e_{k-1})+\omega (e_2,e_{k})\Big )\\&\quad +\varepsilon (-\alpha (2\alpha \omega (e_1,e_{k-1})+\omega (e_2,e_{k-1})+\omega (e_1,e_{k})))\\&=\varepsilon \alpha (\omega (e_1,e_k)+\omega (e_2,e_{k-1}))+\varepsilon \omega (e_2,e_k). \end{aligned}$$

This completes the proof. \(\square \)

Lemma 3.7

If \(k>3\) then for every \(p\ge 1\) we have

$$\begin{aligned} R^{p}\omega (e_1,e_{k-1},e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_3)=0 \end{aligned}$$
(34)

Proof

If \(p=1\) we have

$$\begin{aligned} R\omega&(e_1,e_{k-1},e_1,e_3)=-\omega (R(e_1,e_{k-1})e_1,e_3)-\omega (e_1,R(e_1,e_{k-1})e_3). \end{aligned}$$

For \(p>1\) we have in general

$$\begin{aligned} R^{p}\omega&(e_1,e_{k-1},e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_3)\\&=-R^{p-1}\omega (R(e_1,e_{k-1})e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_3)\\&\quad -R^{p-1}\omega (e_1,R(e_1,e_{k-1})e_{k-1},\ldots ,e_1,e_{k-1},e_1,e_3)\\&\quad -\ldots \\&\quad -R^{p-1}\omega (e_1,e_{k-1},\ldots ,e_1,R(e_1,e_{k-1})e_{k-1},e_1,e_3)\\&\quad -R^{p-1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},R(e_1,e_{k-1})e_1,e_3)\\&\quad -R^{p-1}\omega (e_1,e_{k-1},\ldots ,e_1,e_{k-1},e_1,R(e_1,e_{k-1})e_3). \end{aligned}$$

Now the claim follows immediately from (20) and the fact that for \(k>3\) we always have \(R(e_1,e_{k-1})e_3=0\). \(\square \)

Lemma 3.8

Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) (\(\dim M\ge 4\)) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). If \(R^p\omega =0\) for some \(p\ge 1\) and

$$\begin{aligned} S=\left[ \begin{matrix} S_1 &{}\quad 0 &{}\quad \ldots &{}\quad 0 \\ 0 &{}\quad S_2 &{}\quad \ldots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \ldots &{}\quad S_{q+r} \end{matrix}\right] \end{aligned}$$
(35)

is the Jordan decomposition of S as stated in Lemma 2.4, then \(\dim S_1\le 3\).

Proof

Let us assume that \(S_1\) has the form (15) and \(k>3\). If \(\alpha \ne 0\), from Corollary 3.2 we obtain

$$\begin{aligned} \omega (e_2,e_k)=\ldots =\omega (e_{2n},e_k)=0. \end{aligned}$$

From Lemma 3.5, formula (30) we have

$$\begin{aligned} \omega (e_1,e_k)=\omega (e_2,e_{k-1}). \end{aligned}$$

From Lemma 3.6, formula (32) we get

$$\begin{aligned} \omega (e_1,e_k)=-\omega (e_2,e_{k-1}), \end{aligned}$$

since \(\omega (e_2,e_k)=0\). Therefore \(\omega (e_i,e_k)=0\) for \(i\in \{1,\ldots ,2n\}\), which contradicts the assumption that \(\omega \) is non-degenerate. Thus we must have \(\alpha =0\).

When \(\alpha =0\), from Lemma 3.4, formula (28), we get \(\omega (e_i,e_k)=0\) for \(i\in \{1,2,\ldots ,2n\}\setminus \{2,k\}\). Of course \(\omega (e_k,e_k)=0\) as well. From Lemma 3.6, formula (32), we also have \(\omega (e_2,e_k)=0\), so again \(\omega \) is degenerate; thus \(\dim S_1\) cannot exceed 3. \(\square \)

Lemma 3.9

If \(S_1\) is a 3-dimensional real Jordan block and \(p\ge 1\) then

$$\begin{aligned} R^p\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2) =(-1)^p\varepsilon ^p p!\omega (e_1,e_2) \end{aligned}$$
(36)

Proof

From the Gauss equation we have

$$\begin{aligned} R(e_1,e_2)e_1=0, \quad R(e_1,e_2)e_2=\varepsilon S_1e_1=\varepsilon \alpha e_1+\varepsilon e_2. \end{aligned}$$
(37)

Now, for \(p=1\) we easily check that

$$\begin{aligned} R\omega (e_1,e_2,e_1,e_2)=-\varepsilon \omega (e_1,e_2). \end{aligned}$$

Let us assume that (36) is true for some \(p\ge 1\). Using (37) we compute

$$\begin{aligned} R^{p+1}\omega (e_1&,e_2,e_1,e_2,\ldots ,e_1,e_2)\\&=(R(e_1,e_{2})\cdot R^p\omega )(\underbrace{e_1,e_{2},\ldots ,e_1,e_{2}}_{2p},e_1,e_2)\\&=-R^p\omega (e_1,\varepsilon S_1e_1,\ldots ,e_1,e_{2},e_1,e_2)\\&\quad -R^p\omega (e_1,e_2,e_1,\varepsilon S_1e_1,\ldots ,e_1,e_2)\\&\quad \ldots \\&\quad -R^p\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,\varepsilon S_1e_1,e_1,e_2)\\&\quad -R^p\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,\varepsilon S_1e_1)\\&=-\varepsilon (p+1)R^p\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2)\\&=-\varepsilon (p+1)\cdot (-1)^p\varepsilon ^p p!\omega (e_1,e_2)=(-1)^{p+1}\varepsilon ^{p+1} (p+1)!\omega (e_1,e_2) \end{aligned}$$

Now by the induction principle (36) holds for all \(p\ge 1\). \(\square \)
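Formula (36) can be verified symbolically for small values of p. The sketch below is an illustration only, not part of the proof; it assumes a minimal model suggested by Lemma 2.4: a single 3-dimensional real Jordan block with \(Se_1=\alpha e_1+e_2\), \(Se_2=\alpha e_2+e_3\), \(Se_3=\alpha e_3\), the form h represented by the anti-diagonal matrix \({{\,\textrm{antidiag}\,}}(\varepsilon ,\varepsilon ,\varepsilon )\), and the curvature recovered from the Gauss equation \(R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY\), which reproduces (37).

```python
import sympy as sp

# A symbolic check of formula (36) for small p -- an illustration only, not
# part of the proof.  Assumed model: a single 3-dimensional real Jordan block
# with S e_1 = a e_1 + e_2, S e_2 = a e_2 + e_3, S e_3 = a e_3, the form h
# represented by antidiag(eps, eps, eps), and the curvature taken from the
# Gauss equation R(X,Y)Z = h(Y,Z) S X - h(X,Z) S Y.
a, eps = sp.symbols('alpha epsilon')
w12, w13, w23 = sp.symbols('w12 w13 w23')      # omega(e_i, e_j), i < j

S = sp.Matrix([[a, 0, 0], [1, a, 0], [0, 1, a]])
H = sp.Matrix([[0, 0, eps], [0, eps, 0], [eps, 0, 0]])
W = sp.Matrix([[0, w12, w13], [-w12, 0, w23], [-w13, -w23, 0]])
e = [sp.eye(3).col(k) for k in range(3)]

def R(X, Y, Z):
    # Gauss equation for the curvature tensor
    return (Y.T * H * Z)[0] * (S * X) - (X.T * H * Z)[0] * (S * Y)

def Rp_omega(p, args):
    # R^p omega evaluated on 2p+2 vectors via the derivation action,
    # exactly as in the expansions used in the proofs above
    if p == 0:
        return (args[0].T * W * args[1])[0]
    X, Y, rest = args[0], args[1], list(args[2:])
    return -sum(Rp_omega(p - 1, rest[:i] + [R(X, Y, rest[i])] + rest[i + 1:])
                for i in range(len(rest)))

# (36): R^p omega(e_1,e_2,...,e_1,e_2) = (-1)^p eps^p p! omega(e_1,e_2)
for p in range(1, 4):
    lhs = Rp_omega(p, [e[0], e[1]] * (p + 1))
    rhs = (-1) ** p * eps ** p * sp.factorial(p) * w12
    assert sp.expand(lhs - rhs) == 0
```

The same recursive evaluator is reused in the later checks; only the block data S and H change.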

Lemma 3.10

If \(S_1\) is a 3-dimensional real Jordan block, \(p\ge 2\) and \(i,j>3\) then

$$\begin{aligned} R^p\omega&(e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_i,e_1,e_j)\nonumber \\&=(-1)^p\varepsilon ^{p-1} (p-1)!h(e_i,e_j)(2\alpha \omega (e_1,e_2)+\omega (e_1,e_3)) \end{aligned}$$
(38)

Proof

Using the Gauss equation let us note that

$$\begin{aligned} R^2\omega (e_1,e_2,e_2,e_i,e_1,e_j)&=\varepsilon h(e_i,e_j)(2\alpha \omega (e_1,e_2)+\omega (e_1,e_3)), \end{aligned}$$
(39)
$$\begin{aligned} R^2\omega (e_1,e_2,e_1,e_i,e_1,e_j)&=0 \end{aligned}$$
(40)

Now, for \(p\ge 2\) we have

$$\begin{aligned}{} & {} R^{p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i,e_1,e_j) \\{} & {} \quad =-\varepsilon (p-1)R^{p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i,e_1,e_j) \end{aligned}$$

since \(R(e_1,e_2)e_1=R(e_1,e_2)e_i=R(e_1,e_2)e_j=0\) and \(R(e_1,e_2)e_2=\varepsilon S_1 e_1=\varepsilon \alpha e_1+\varepsilon e_2\). In consequence, by the induction principle

$$\begin{aligned} R^{p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i,e_1,e_j)=0 \end{aligned}$$
(41)

for all \(p\ge 2\). We also compute

$$\begin{aligned}&R^{p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_i,e_1,e_j)\\&\quad =-\varepsilon (p-1)R^{p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_i,e_1,e_j)\\&\qquad -R^{p}\omega (e_1,e_2,\ldots ,e_1,e_2,R(e_1,e_2)e_2,e_i,e_1,e_j)\\&\quad =-\varepsilon p R^{p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_i,e_1,e_j)\\&\qquad -\varepsilon \alpha R^{p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i,e_1,e_j)\\&\quad =-\varepsilon p R^{p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_i,e_1,e_j), \end{aligned}$$

where the last equality follows from (41). The induction principle implies that (38) holds for all \(p\ge 2\). \(\square \)

Lemma 3.11

If \(S_1\) is a 3-dimensional real Jordan block, \(p\ge 1\) and \(\alpha =0\) then

$$\begin{aligned} R^p\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_i) =(-1)^p\varepsilon ^p p!\omega (e_2,e_i) \end{aligned}$$
(42)

for any \(i \in \{1,\ldots ,2n\}\setminus \{3\}\).

Proof

First notice that (42) holds trivially for \(i=2\). Now we assume that \(i \in \{1,\ldots ,2n\}\setminus \{2,3\}\). Note that from the Gauss equation and assumption \(\alpha =0\) we have

$$\begin{aligned} R(e_1,e_2)e_1=R(e_1,e_2)e_i=0, \quad R(e_1,e_2)e_2=\varepsilon S_1e_1=\varepsilon e_2. \end{aligned}$$
(43)

For \(p=1\) we easily get

$$\begin{aligned} R\omega (e_1,e_2,e_2,e_i)=-\varepsilon \omega (e_2,e_i). \end{aligned}$$

Let us assume that (42) holds for some \(p\ge 1\); we shall show that it also holds for \(p+1\). Indeed, we obtain

$$\begin{aligned} R^{p+1}&\omega (\underbrace{e_1,e_2,\ldots ,e_1,e_2}_{2p+2},e_2,e_i)\\&=(R(e_1,e_2)\cdot R^{p}\omega )(\underbrace{e_1,e_2,\ldots ,e_1,e_2}_{2p},e_2,e_i)\\&=-\varepsilon (p+1)R^p\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_i)\\&=-\varepsilon (p+1)\cdot (-1)^p\varepsilon ^p p!\omega (e_2,e_i)=(-1)^{p+1}\varepsilon ^{p+1}(p+1)!\omega (e_2,e_i) \end{aligned}$$

thanks to (43). Now by the induction principle (42) holds for all \(p\ge 1\). \(\square \)

Lemma 3.12

If \(S_1\) is a 3-dimensional real Jordan block, \(p\ge 1\) and \(\alpha =0\) then

$$\begin{aligned} R^p\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_1,e_2) =(-1)^{p+1}\varepsilon ^p (p-1)!\omega (e_2,e_3). \end{aligned}$$
(44)

Proof

First we notice that (44) is satisfied for \(p=1\). Indeed, since

$$\begin{aligned} R(e_2,e_3)e_1=\varepsilon S_1e_2=\varepsilon e_3,\quad R(e_2,e_3)e_2=-\varepsilon S_1e_3=0 \end{aligned}$$

we easily obtain

$$\begin{aligned} R\omega (e_2,e_3,e_1,e_2)=-\omega (R(e_2,e_3)e_1,e_2)-\omega (e_1,R(e_2,e_3)e_2)=\varepsilon \omega (e_2,e_3). \end{aligned}$$

Now assume that (44) holds for some \(p\ge 1\); we shall show that it also holds for \(p+1\). From the Gauss equation and our assumptions we have

$$\begin{aligned} R(e_1,e_2)e_1=0, \quad R(e_1,e_2)e_2=\varepsilon S_1e_1=\varepsilon e_2, \quad R(e_1,e_2)e_3=-\varepsilon S_1e_2=-\varepsilon e_3. \end{aligned}$$

Now we obtain

$$\begin{aligned} R^{p+1}&\omega (\underbrace{e_1,e_2,\ldots ,e_1,e_2}_{2p},e_2,e_3,e_1,e_2)\\&=(R(e_1,e_2)\cdot R^{p}\omega )(\underbrace{e_1,e_2,\ldots ,e_1,e_2}_{2p-2},e_2,e_3,e_1,e_2)\\&=-\varepsilon (p+1)R^p\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_1,e_2)\\&\quad -R^p\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,R(e_1,e_2)e_3,e_1,e_2)\\&=-\varepsilon pR^p\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_1,e_2)\\&=-\varepsilon p\cdot (-1)^{p+1}\varepsilon ^p (p-1)!\omega (e_2,e_3)=(-1)^{p+2}\varepsilon ^{p+1}p!\omega (e_2,e_3). \end{aligned}$$

Now by the induction principle (44) holds for all \(p\ge 1\). \(\square \)

Lemma 3.13

Let us assume that \(S_1\), \(S_2\) (from Lemma 2.4) are 2-dimensional real Jordan blocks. That is

$$\begin{aligned} S_1=\left[ \begin{matrix} \alpha &{}\quad 0\\ 1 &{}\quad \alpha \end{matrix}\right] , \quad S_2=\left[ \begin{matrix} \beta &{}\quad 0\\ 1 &{}\quad \beta \end{matrix}\right] , \end{aligned}$$

where \(\alpha , \beta \in \mathbb {R}\). We also assume that \(H_1, H_2\) have the form

$$\begin{aligned} H_1=\left[ \begin{matrix} 0 &{}\quad \varepsilon \\ \varepsilon &{}\quad 0 \end{matrix}\right] , \quad H_2=\left[ \begin{matrix} 0 &{}\quad \eta \\ \eta &{}\quad 0 \end{matrix}\right] , \end{aligned}$$

where \(\varepsilon ,\eta \in \{-1,1\}\).

Then for every \(i\in \{1,\ldots ,2n\}\setminus \{2,4\}\) we have

$$\begin{aligned}&R^p\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_3)=0 \quad \text {for }p\ge 1, \end{aligned}$$
(45)
$$\begin{aligned} R^{2p+1}\omega&(e_1,e_3,\ldots ,e_1,e_3,e_i,e_4)\nonumber \\&=(-1)^{p+1}\eta ^{p+1}\varepsilon ^p(\alpha \omega (e_i,e_1)+\omega (e_i,e_2)) \quad \text {for }p\ge 0. \end{aligned}$$
(46)

Proof

First note that from the Gauss equation we easily obtain that \(R(e_1,e_3)e_i =0\) for \(i\in \{1,\ldots ,2n\}\setminus \{2,4\}\). In particular

$$\begin{aligned} R(e_1,e_3)e_1=R(e_1,e_3)e_3=0. \end{aligned}$$
(47)

Thus (45) follows immediately.

In order to prove (46) first notice that (46) is satisfied for \(p=0\), since \(R(e_1,e_3)e_i=0\) and

$$\begin{aligned} R(e_1,e_3)e_4=\eta S_1e_1=\eta \alpha e_1+\eta e_2. \end{aligned}$$
(48)

Now assume that (46) holds for some \(p\ge 0\); we shall show that it also holds for \(p+1\). From the Gauss equation we also have

$$\begin{aligned} R(e_1,e_3)e_2=-\varepsilon S_2e_3=-\varepsilon \beta e_3-\varepsilon e_4. \end{aligned}$$
(49)

Now using (47)–(49) we obtain

$$\begin{aligned} R^{2p+3}&\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_4)\\&=-\eta \alpha \underbrace{R^{2p+2}\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_1)}_{0}\\&\quad -\eta R^{2p+2}\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_2)\\&=\eta R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,R(e_1,e_3)e_2)\\&=-\eta \varepsilon \beta R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_3)\\&\quad -\eta \varepsilon R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_4)\\&=-\eta \varepsilon R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_4), \end{aligned}$$

where the last equality follows from (45). Finally we have

$$\begin{aligned} R^{2p+3}&\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_4)\\&=-\eta \varepsilon R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_i,e_4)\\&=-\eta \varepsilon (-1)^{p+1}\eta ^{p+1}\varepsilon ^{p}(\alpha \omega (e_i,e_1)+\omega (e_i,e_2))\\&=(-1)^{p+2}\eta ^{p+2}\varepsilon ^{p+1}(\alpha \omega (e_i,e_1)+\omega (e_i,e_2)). \end{aligned}$$

By the induction principle (46) holds for all \(p\ge 0\). \(\square \)
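Formulas (45) and (46) can be tested symbolically in the smallest case \(2n=4\). The sketch below is an illustration only, not part of the proof; it assumes exactly the model of the lemma (two 2-dimensional real Jordan blocks with \(S_1\), \(S_2\), \(H_1\), \(H_2\) as above) with the curvature taken from the Gauss equation \(R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY\), which reproduces (47)–(49).

```python
import sympy as sp
from itertools import combinations

# A symbolic check of (45) and (46) in the smallest case 2n = 4 -- an
# illustration only, not part of the proof.  Assumed model: S_1, S_2 and
# H_1, H_2 exactly as in the statement of the lemma, with the curvature
# taken from the Gauss equation R(X,Y)Z = h(Y,Z) S X - h(X,Z) S Y.
a, b, eps, eta = sp.symbols('alpha beta epsilon eta')
S = sp.Matrix([[a, 0, 0, 0], [1, a, 0, 0], [0, 0, b, 0], [0, 0, 1, b]])
H = sp.diag(sp.Matrix([[0, eps], [eps, 0]]), sp.Matrix([[0, eta], [eta, 0]]))
wsym = {(i, j): sp.Symbol(f'w{i + 1}{j + 1}') for i, j in combinations(range(4), 2)}
W = sp.Matrix(4, 4, lambda i, j: wsym.get((i, j), 0) - wsym.get((j, i), 0))
e = [sp.eye(4).col(k) for k in range(4)]

def omega(u, v):
    return (u.T * W * v)[0]

def R(X, Y, Z):
    # Gauss equation for the curvature tensor
    return (Y.T * H * Z)[0] * (S * X) - (X.T * H * Z)[0] * (S * Y)

def Rp_omega(p, args):
    # R^p omega evaluated on 2p+2 vectors via the derivation action
    if p == 0:
        return omega(args[0], args[1])
    X, Y, rest = args[0], args[1], list(args[2:])
    return -sum(Rp_omega(p - 1, rest[:i] + [R(X, Y, rest[i])] + rest[i + 1:])
                for i in range(len(rest)))

for i in (0, 2):                                     # e_i with i in {1, 3}
    for p in (1, 2):                                 # (45)
        assert Rp_omega(p, [e[0], e[2]] * p + [e[i], e[2]]) == 0
    for p in (0, 1):                                 # (46)
        lhs = Rp_omega(2 * p + 1, [e[0], e[2]] * (2 * p + 1) + [e[i], e[3]])
        rhs = (-1) ** (p + 1) * eta ** (p + 1) * eps ** p \
            * (a * omega(e[i], e[0]) + omega(e[i], e[1]))
        assert sp.expand(lhs - rhs) == 0
```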

Lemma 3.14

Let \(S_1\), \(S_2\) be 2-dimensional real Jordan blocks as in Lemma 3.13. If \(\alpha = \beta =0\) then

$$\begin{aligned} R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_2,e_1,e_2)&=(-1)^p(\varepsilon \eta )^{p-1}\cdot 2^{2p-2}\omega (e_2,e_4), \end{aligned}$$
(50)
$$\begin{aligned} R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_4,e_1,e_4)&=(-1)^{p+1}(\varepsilon \eta )^{p}\cdot 2^{2p-2}\omega (e_2,e_4) \end{aligned}$$
(51)

for every \(p\ge 1\).

Proof

First note that formulas (47), (48) and (49) from the proof of Lemma 3.13 are still valid in our case. Using them and taking into account that \(\alpha = \beta =0\) we easily compute that

$$\begin{aligned} R^{2}\omega (e_1,e_3,e_1,e_2,e_1,e_2)=-\varepsilon ^2\omega (e_2,e_4)=-\omega (e_2,e_4) \end{aligned}$$

and

$$\begin{aligned} R^{2}\omega (e_1,e_3,e_1,e_4,e_1,e_4)=\varepsilon \eta \omega (e_2,e_4). \end{aligned}$$

That is (50) and (51) are true for \(p=1\). Now assume that (50) and (51) hold for some \(p\ge 1\). Again using (47)–(49) and the fact that \(\alpha = \beta =0\) we obtain

$$\begin{aligned} R^{2p+2}\omega&(e_1,e_3,\ldots ,e_1,e_3,e_1,e_2,e_1,e_2)\\&=-R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,R(e_1,e_3)e_2,e_1,e_2)\\&\quad -R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_2,e_1,R(e_1,e_3)e_2)\\&=\varepsilon R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_4,e_1,e_2)\\&\quad +\varepsilon R^{2p+1}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_2,e_1,e_4)\\&=-\varepsilon R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,R(e_1,e_3)e_4,e_1,e_2)\\&\quad -\varepsilon R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_4,e_1,R(e_1,e_3)e_2)\\&\quad -\varepsilon R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,R(e_1,e_3)e_2,e_1,e_4)\\&\quad -\varepsilon R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_2,e_1,R(e_1,e_3)e_4)\\&=-2\varepsilon \eta R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_2,e_1,e_2)\\&\quad +2\varepsilon ^2 R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_4,e_1,e_4)\\&=2\varepsilon ^2R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_4,e_1,e_4)\\&\quad -2\varepsilon \eta R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_2,e_1,e_2)\\&=2(\varepsilon ^2 (-1)^{p{+}1}(\varepsilon \eta )^{p}\cdot 2^{2p-2}\omega (e_2,e_4){-}\varepsilon \eta (-1)^{p}(\varepsilon \eta )^{p-1}\cdot 2^{2p-2}\omega (e_2,e_4))\\&=(-1)^{p+1}(\varepsilon \eta )^p\cdot 2^{2p}\omega (e_2,e_4). \end{aligned}$$

In a similar way we show that

$$\begin{aligned} R^{2p+2}\omega&(e_1,e_3,\ldots ,e_1,e_3,e_1,e_4,e_1,e_4)\\&=2\eta ^2 R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_2,e_1,e_2)\\&\quad -2\varepsilon \eta R^{2p}\omega (e_1,e_3,\ldots ,e_1,e_3,e_1,e_4,e_1,e_4)\\&=2\eta ^2 (-1)^p(\varepsilon \eta )^{p-1}\cdot 2^{2p-2}\omega (e_2,e_4)\\&\quad -2\varepsilon \eta (-1)^{p+1}(\varepsilon \eta )^{p}\cdot 2^{2p-2}\omega (e_2,e_4)\\&=(-1)^{p+2}2^{2p}(\varepsilon \eta )^{p+1}\omega (e_2,e_4) \end{aligned}$$

Now by the induction principle (50) and (51) hold for all \(p\ge 1\). \(\square \)
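Formulas (50) and (51) can also be tested symbolically. The sketch below is an illustration only, not part of the proof; it assumes the same hypothetical 4-dimensional model as for Lemma 3.13, now with \(\alpha =\beta =0\). Since the closed formulas already use \(\varepsilon ^2=\eta ^2=1\), the signs \(\varepsilon ,\eta \in \{-1,1\}\) are substituted explicitly instead of being kept symbolic.

```python
import sympy as sp
from itertools import combinations, product

# A symbolic check of (50) and (51) -- an illustration only, not part of the
# proof.  Assumed model: the same hypothetical 4-dimensional setting as for
# Lemma 3.13, with alpha = beta = 0 and concrete signs eps, eta in {-1, 1}.
wsym = {(i, j): sp.Symbol(f'w{i + 1}{j + 1}') for i, j in combinations(range(4), 2)}
W = sp.Matrix(4, 4, lambda i, j: wsym.get((i, j), 0) - wsym.get((j, i), 0))
e = [sp.eye(4).col(k) for k in range(4)]
S = sp.Matrix([[0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0]])

def check(eps, eta):
    H = sp.diag(sp.Matrix([[0, eps], [eps, 0]]), sp.Matrix([[0, eta], [eta, 0]]))
    def R(X, Y, Z):                                  # Gauss equation
        return (Y.T * H * Z)[0] * (S * X) - (X.T * H * Z)[0] * (S * Y)
    def Rp_omega(p, args):                           # derivation action
        if p == 0:
            return (args[0].T * W * args[1])[0]
        X, Y, rest = args[0], args[1], list(args[2:])
        return -sum(Rp_omega(p - 1, rest[:i] + [R(X, Y, rest[i])] + rest[i + 1:])
                    for i in range(len(rest)))
    w24 = wsym[(1, 3)]
    for p in (1, 2):
        head = [e[0], e[2]] * (2 * p - 1)
        lhs50 = Rp_omega(2 * p, head + [e[0], e[1], e[0], e[1]])
        lhs51 = Rp_omega(2 * p, head + [e[0], e[3], e[0], e[3]])
        assert sp.expand(lhs50 - (-1) ** p * (eps * eta) ** (p - 1) * 4 ** (p - 1) * w24) == 0
        assert sp.expand(lhs51 - (-1) ** (p + 1) * (eps * eta) ** p * 4 ** (p - 1) * w24) == 0
    return True

for eps, eta in product((1, -1), repeat=2):
    assert check(eps, eta)
```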

Theorem 3.15

Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) (\(\dim M\ge 4\)) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). Let

$$\begin{aligned} S=\left[ \begin{matrix} S_1 &{}\quad 0 &{}\quad \ldots &{}\quad 0 \\ 0 &{}\quad S_2 &{}\quad \ldots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \ldots &{}\quad S_{q+r} \end{matrix}\right] \end{aligned}$$
(52)

be the Jordan decomposition of S as stated in Lemma 2.4. If \(R^p\omega =0\) for some \(p\ge 1\) then \(\dim S_1\le 2\) and \(\dim S_i=1\) for \(i=2,\ldots ,q\).

Proof

By Lemma 3.8, \(\dim S_1\le 3\). If \(\dim S_1=3\), using Corollary 3.2 we obtain

$$\begin{aligned} \varepsilon ^p\alpha ^p\omega (e_2,e_3)=\ldots =\varepsilon ^p\alpha ^p\omega (e_{2n},e_3)=0. \end{aligned}$$
(53)

By Lemma 3.9, \(\omega (e_1,e_2)=0\). Now using Lemma 3.10 we obtain

$$\begin{aligned} h(e_i,e_j)\omega (e_1,e_3)=0 \end{aligned}$$

for \(i,j>3\). Since \(\dim M\ge 4\) and h is non-degenerate, there exist \(i,j>3\) such that \(h(e_i,e_j)\ne 0\) and in consequence \(\omega (e_1,e_3)=0\). If \(\alpha \ne 0\) then from (53) we get \(\omega (e_2,e_3)=\ldots =\omega (e_{2n},e_3)=0\), so \(\omega \) is degenerate. Hence we must have \(\alpha =0\). Now using Lemmas 3.11 and 3.12 we show that \(\omega (e_2,e_i)=0\) for \(i=1,\ldots ,2n\), that is, \(\omega \) is degenerate again. In consequence, the case \(\dim S_1=3\) is not possible.

Assume now that \(\dim S_1=2\). If \(\dim S_2=2\) then

$$\begin{aligned} S_1=\left[ \begin{matrix} \alpha &{}\quad 0\\ 1 &{}\quad \alpha \end{matrix}\right] , \quad S_2=\left[ \begin{matrix} \beta &{}\quad 0\\ 1 &{}\quad \beta \end{matrix}\right] , \end{aligned}$$

where \(\alpha , \beta \in \mathbb {R}\). If \(R^p\omega =0\) for some \(p\ge 1\) then also \(R^{2p+1}\omega =0\). Now using Lemma 3.13 we get

$$\begin{aligned} \alpha \omega (e_i,e_1)+\omega (e_i,e_2)=0 \end{aligned}$$

for \(i\in \{1,\ldots ,2n\}\setminus \{2,4\}\). In particular, (for \(i=1\)) we get that \(\omega (e_1,e_2)=0\). On the other hand by Corollary 3.2 we have

$$\begin{aligned} \varepsilon ^p\alpha ^p\omega (e_3,e_2)=\ldots =\varepsilon ^p\alpha ^p\omega (e_{2n},e_2)=0. \end{aligned}$$

Note that the case \(\alpha \ne 0\) is not possible, since then \(\omega (e_i,e_2)=0\) for all \(i\in \{1,\ldots ,2n\}\) and \(\omega \) is degenerate. Thus we must have \(\alpha =0\). In this case Lemma 3.13 implies that \(\omega (e_i,e_2)=0\) for \(i\in \{1,\ldots ,2n\}\setminus \{2,4\}\). Since, without loss of generality, we can exchange \(S_1\) with \(S_2\), we also get \(\beta =0\). Now by Lemma 3.14 we get \(\omega (e_2,e_4)=0\) and \(\omega \) is degenerate, so this case is also not possible. Summarising, we must have \(\dim S_1\le 2\) and \(\dim S_i=1\) for \(i=2,\ldots ,q\), which completes the proof. \(\square \)

4 Complex Jordan Blocks

In this section we study properties of the complex Jordan blocks of the shape operator S. Before we proceed, to simplify the proofs in this section, we need to slightly modify the notation of Lemma 2.4.

Let \(\{e_1,\ldots ,e_{2n}\}\) be the basis of \(T_xM\) from Lemma 2.4. Without loss of generality, by rearranging and renaming the vectors \(e_1,\ldots ,e_{2n}\), we can reorder \(S_i\) and \(H_i\) in such a way that \(S_{1},\ldots ,S_{r}\) are complex blocks and \(S_{r+1},\ldots ,S_{r+q}\) are real blocks. If we assume that \(S_1\) is a 2k-dimensional block, \(k\ge 1\), then in the new notation we have

$$\begin{aligned} S_1=\left[ \begin{matrix} \alpha &{} \quad \beta &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 &{} \quad 0 \\ -\beta &{} \quad \alpha &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad \alpha &{} \quad \beta &{} \quad \ldots &{} \quad 0 &{} \quad 0\\ 0 &{} \quad 1 &{} \quad -\beta &{} \quad \alpha &{} \quad \ldots &{} \quad 0 &{} \quad 0\\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad \alpha &{} \quad \beta \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad -\beta &{} \quad \alpha \end{matrix}\right] \in M(2k,2k,\mathbb {R}), \end{aligned}$$

where \(\alpha , \beta \in \mathbb {R}\), \(\beta \ne 0\) and

$$\begin{aligned} H_1=\left[ \begin{matrix} 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0 &{} \quad 1 \\ 0 &{} \quad 0 &{} \quad \cdots &{} \quad 1 &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \cdots &{} \quad \cdots \\ 0 &{} \quad 1 &{} \quad \cdots &{} \quad 0 &{} \quad 0 \\ 1 &{} \quad 0 &{} \quad \cdots &{} \quad 0 &{} \quad 0 \end{matrix}\right] \in M(2k,2k,\mathbb {R}). \end{aligned}$$

Moreover, the vectors \(\{e_1,\ldots ,e_{2k}\}\) form a basis of the subspace corresponding to \(S_1\).

In all the lemmas below (if not stated otherwise) we always assume that \(S_1\) and \(H_1\) are as above.

Let us start with the following three lemmas related to 2-dimensional complex Jordan blocks.

Lemma 4.1

If \(S_1\) is a 2-dimensional complex Jordan block then for every \(i\in \{3,\ldots ,2n\}\) we have

$$\begin{aligned} R(e_1,e_2)e_1&=Se_1=\alpha e_1 -\beta e_2,\\ R(e_1,e_2)e_2&=-Se_2=-\beta e_1-\alpha e_2,\\ R(e_1,e_2)e_i&=0. \end{aligned}$$

Proof

The proof easily follows from the Gauss equation and the fact that \(h(e_1,e_i)=h(e_2,e_i)=0\) for \(i>2\). \(\square \)

Lemma 4.2

If \(S_1\) is a 2-dimensional complex Jordan block then for every \(p\ge 1\), \(i\in \{3,\ldots ,2n\}\) we have

$$\begin{aligned} R^{2p}\omega (e_1,e_{2},e_1,e_{2},\ldots ,e_1,e_{2},e_1,e_i)=(\det S_1)^p\omega (e_1,e_i) \end{aligned}$$
(54)
$$\begin{aligned} R^{2p}\omega (e_1,e_{2},e_1,e_{2},\ldots ,e_1,e_{2},e_2,e_i)=(\det S_1)^p\omega (e_2,e_i) \end{aligned}$$
(55)

Proof

We shall prove (54). For \(p=1\), using Lemma 4.1 we compute

$$\begin{aligned} R^2\omega (e_1,e_2,&e_1,e_2,e_1,e_i)\\&=-R\omega (R(e_1,e_2)e_1,e_2,e_1,e_i)-R\omega (e_1,R(e_1,e_2)e_2,e_1,e_i)\\&\quad -R\omega (e_1,e_2,R(e_1,e_2)e_1,e_i)-R\omega (e_1,e_2,e_1,R(e_1,e_2)e_i)\\&=-\alpha R\omega (e_1,e_2,e_1,e_i)+\alpha R\omega (e_1,e_2,e_1,e_i)\\&\quad -R\omega (e_1,e_2,\alpha e_1-\beta e_2,e_i)\\&=-\alpha R\omega (e_1,e_2,e_1,e_i)+\beta R\omega (e_1,e_2,e_2,e_i)\\&=-\alpha (-\omega (R(e_1,e_2)e_1,e_i)-\omega (e_1,R(e_1,e_2)e_i))\\&\quad +\beta (-\omega (R(e_1,e_2)e_2,e_i)-\omega (e_2,R(e_1,e_2)e_i))\\&=\alpha ^2\omega (e_1,e_i)-\alpha \beta \omega (e_2,e_i)+\beta ^2\omega (e_1,e_i)+\beta \alpha \omega (e_2,e_i)\\&=(\alpha ^2+\beta ^2)\omega (e_1,e_i). \end{aligned}$$

Assume that (54) holds for some \(p\ge 1\); we shall show that it also holds for \(p+1\). Indeed, using Lemma 4.1 we have

$$\begin{aligned}&R^{2p+2}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)\\&=-R^{2p+1}\omega (R(e_1,e_2)e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)\\&\quad -R^{2p+1}\omega (e_1,R(e_1,e_2)e_2,\ldots ,e_1,e_2,e_1,e_i)\\&\quad -\ldots \\&\quad -R^{2p+1}\omega (e_1,e_2,\ldots ,R(e_1,e_2)e_1,e_2,e_1,e_i)\\&\quad -R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,R(e_1,e_2)e_2,e_1,e_i)\\&\quad -R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,R(e_1,e_2)e_1,e_i)\\&\quad -R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,R(e_1,e_2)e_i)\\&=- \alpha R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)+\alpha R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)\\&\quad -\ldots \\&\quad -\alpha R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)+\alpha R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)\\&\quad -R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,\alpha e_1 -\beta e_2,e_i)-0\\&=-R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,\alpha e_1 -\beta e_2,e_i)\\&=-\alpha R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)+\beta R^{2p+1}\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_i). \end{aligned}$$

Now we compute that

$$\begin{aligned} R^{2p+1}&\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)\\&=-R^{2p}\omega (R(e_1,e_2)e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)\\&\quad -R^{2p}\omega (e_1,R(e_1,e_2)e_2,\ldots ,e_1,e_2,e_1,e_i)\\&\quad \ldots \\&\quad -R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,R(e_1,e_2)e_1,e_i)\\&\quad -R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,R(e_1,e_2)e_i)\\&=-R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,R(e_1,e_2)e_1,e_i). \end{aligned}$$

Similarly

$$\begin{aligned} R^{2p+1}&\omega (e_1,e_2,\ldots ,e_1,e_2,e_2,e_i)=-R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,R(e_1,e_2)e_2,e_i). \end{aligned}$$

Thus we obtain

$$\begin{aligned} R^{2p+2}&\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)\\&=\alpha R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,\alpha e_1-\beta e_2,e_i)\\&\quad -\beta R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,-\beta e_1-\alpha e_2,e_i)\\&=\alpha ^2R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)+\beta ^2 R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)\\&=\det S_1\cdot R^{2p}\omega (e_1,e_2,\ldots ,e_1,e_2,e_1,e_i)=(\det S_1)^{p+1}\omega (e_1,e_i). \end{aligned}$$

Now by the induction principle (54) holds for all \(p\ge 1\). The formula (55) can be shown in a similar way. \(\square \)
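Formulas (54) and (55) can be tested symbolically for small p. The sketch below is an illustration only, not part of the proof; it assumes a hypothetical 4-dimensional model whose first block is the 2-dimensional complex block \(S_1\) with \(H_1={{\,\textrm{antidiag}\,}}(1,1)\) as above, completed, purely for illustration, by two real 1-dimensional blocks with \(Se_3=\lambda e_3\), \(Se_4=\mu e_4\) and \(h(e_3,e_3)=h(e_4,e_4)=1\); the curvature again comes from the Gauss equation \(R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY\), which reproduces Lemma 4.1.

```python
import sympy as sp
from itertools import combinations

# A symbolic check of (54) and (55) -- an illustration only, not part of the
# proof.  Assumed model: the 2-dimensional complex block S_1 = [[a, b], [-b, a]]
# with H_1 = antidiag(1, 1), completed (purely for illustration) by two real
# 1-dimensional blocks S e_3 = l e_3, S e_4 = m e_4 with h(e_3,e_3) = h(e_4,e_4) = 1.
a, b, l, m = sp.symbols('alpha beta lambda mu')
S = sp.Matrix([[a, b, 0, 0], [-b, a, 0, 0], [0, 0, l, 0], [0, 0, 0, m]])
H = sp.diag(sp.Matrix([[0, 1], [1, 0]]), sp.eye(2))
wsym = {(i, j): sp.Symbol(f'w{i + 1}{j + 1}') for i, j in combinations(range(4), 2)}
W = sp.Matrix(4, 4, lambda i, j: wsym.get((i, j), 0) - wsym.get((j, i), 0))
e = [sp.eye(4).col(k) for k in range(4)]

def R(X, Y, Z):
    # Gauss equation for the curvature tensor
    return (Y.T * H * Z)[0] * (S * X) - (X.T * H * Z)[0] * (S * Y)

def Rp_omega(p, args):
    # R^p omega evaluated on 2p+2 vectors via the derivation action
    if p == 0:
        return (args[0].T * W * args[1])[0]
    X, Y, rest = args[0], args[1], list(args[2:])
    return -sum(Rp_omega(p - 1, rest[:i] + [R(X, Y, rest[i])] + rest[i + 1:])
                for i in range(len(rest)))

det1 = a ** 2 + b ** 2                               # det S_1
for i in (2, 3):                                     # e_i with i in {3, 4}
    for p in (1, 2):
        head = [e[0], e[1]] * (2 * p)
        assert sp.expand(Rp_omega(2 * p, head + [e[0], e[i]])
                         - det1 ** p * (e[0].T * W * e[i])[0]) == 0   # (54)
        assert sp.expand(Rp_omega(2 * p, head + [e[1], e[i]])
                         - det1 ** p * (e[1].T * W * e[i])[0]) == 0   # (55)
```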

Lemma 4.3

If \(S_1\) is a 2-dimensional complex Jordan block then for every \(p\ge 1\), \(i,j >2\) we have

$$\begin{aligned} R^{2p}\omega (e_1,e_{2},\ldots ,e_1,e_{2},e_1,e_{i},e_2,e_j)&=2^{2p-1}\beta ^2(\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2) \end{aligned}$$
(56)
$$\begin{aligned} R^{2p}\omega (e_1,e_{2},\ldots ,e_1,e_{2},e_2,e_{i},e_1,e_j)&=2^{2p-1}\beta ^2(\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2) \end{aligned}$$
(57)
$$\begin{aligned} R^{2p}\omega (e_1,e_{2},\ldots ,e_1,e_{2},e_1,e_{i},e_1,e_j)&-R^{2p}\omega (e_1,e_{2},\ldots ,e_1,e_{2},e_2,e_{i},e_2,e_j) \nonumber \\&=-2^{2p}\alpha \beta (\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2) \end{aligned}$$
(58)

Proof

To simplify computations let us denote

$$\begin{aligned} B^p(X,Y,i,j):=R^p\omega (e_1,e_{2},e_1,e_{2},\ldots ,e_1,e_{2},X,e_{i},Y,e_{j}) \end{aligned}$$
(59)

where \(X,Y\in {\text {span}}\{e_1,e_2\}\), \(i,j\in \{3,\dots ,2n\}\) and \(p\ge 1\).

First we shall show that (56)–(58) are true for \(p=1\). Indeed, using Lemma 4.1, we obtain

$$\begin{aligned} R^{2}&\omega (e_1,e_{2},e_1,e_{i},e_2,e_j)\\&=-R\omega (S e_1,e_{i},e_2,e_j)+R\omega (e_1,e_{i},S e_2,e_j)\\&=-\alpha R\omega (e_1,e_{i},e_2,e_j)+\beta R\omega (e_2,e_{i},e_2,e_j)\\&\quad +\beta R\omega (e_1,e_{i},e_1,e_j)+\alpha R\omega (e_1,e_{i},e_2,e_j)\\&=\beta (R\omega (e_2,e_{i},e_2,e_j)+R\omega (e_1,e_{i},e_1,e_j))\\&=\beta (-\omega (R(e_2,e_{i})e_2,e_j)-\omega (e_2,R(e_2,e_{i})e_j)\\&\quad -\omega (R(e_1,e_{i})e_1,e_j)-\omega (e_1,R(e_1,e_{i})e_j))\\&=-\beta (\omega (e_2,R(e_2,e_{i})e_j)+\omega (e_1,R(e_1,e_{i})e_j))\\&=-\beta h(e_i,e_j)\Big (\omega (e_2,S e_2)+\omega (e_1, S e_1)\Big )\\&=2\beta ^2 h(e_i,e_j)\omega (e_1,e_2). \end{aligned}$$

That is,

$$\begin{aligned} B^2(e_1,e_2,i,j)=2\beta ^2 h(e_i,e_j)\omega (e_1,e_2). \end{aligned}$$
(60)

Exactly in the same way we show that

$$\begin{aligned} B^2(e_2,e_1,i,j)=2\beta ^2 h(e_i,e_j)\omega (e_1,e_2). \end{aligned}$$
(61)

For (58) we compute

$$\begin{aligned}&B^2(e_1,e_1,i,j)-B^2(e_2,e_2,i,j)\\&=-B^1(R(e_1,e_2)e_1,e_1,i,j)-B^1(e_1,R(e_1,e_2)e_1,i,j)\\&\quad +B^1(R(e_1,e_2)e_2,e_2,i,j)+B^1(e_2,R(e_1,e_2)e_2,i,j)\\&=-B^1(S e_1,e_1,i,j)-B^1(e_1,S e_1,i,j)\\&\quad -B^1(S e_2,e_2,i,j)-B^1(e_2,S e_2,i,j)\\&=-2\alpha (B^1(e_1,e_1,i,j)+B^1(e_2,e_2,i,j))\\&=-2\alpha (-h(e_i,e_j)\omega (e_1, S e_1)-h(e_i,e_j)\omega (e_2, S e_2))\\&=2\alpha h(e_i,e_j)(-\beta \omega (e_1,e_2)+\beta \omega (e_2,e_1))\\&=-4\alpha \beta h(e_i,e_j)\omega (e_1,e_2). \end{aligned}$$

Now assume that (56)–(58) are all true for some \(p\ge 1\). We compute

$$\begin{aligned} B^{2p+2}(e_1,e_2,i,j)&=R^{2p+2}\omega (e_1,e_{2},\ldots ,e_1,e_{2},e_1,e_{i},e_2,e_j)\\&=-R^{2p+1}\omega (S e_1,e_{2},\ldots ,e_1,e_{2},e_1,e_{i},e_2,e_j)\\&\quad +R^{2p+1}\omega (e_1,S e_{2},\ldots ,e_1,e_{2},e_1,e_{i},e_2,e_j)\\&\quad \cdots \\&\quad -R^{2p+1}\omega (e_1,e_{2},\ldots ,S e_1,e_{2},e_1,e_{i},e_2,e_j)\\&\quad +R^{2p+1}\omega (e_1,e_{2},\ldots ,e_1,S e_{2},e_1,e_{i},e_2,e_j)\\&\quad -R^{2p+1}\omega (e_1,e_{2},\ldots ,e_1,e_{2},S e_1,e_{i},e_2,e_j)\\&\quad +R^{2p+1}\omega (e_1,e_{2},\ldots ,e_1,e_{2},e_1,e_{i},S e_2,e_j)\\&=-R^{2p+1}\omega (e_1,e_{2},\ldots ,e_1,e_{2},S e_1,e_{i},e_2,e_j)\\&\quad +R^{2p+1}\omega (e_1,e_{2},\ldots ,e_1,e_{2},e_1,e_{i},S e_2,e_j)\\&=-B^{2p+1}(S e_1,e_2,i,j)+B^{2p+1}(e_1,S e_2,i,j), \end{aligned}$$

since for \(l=1,\ldots ,2p\), the terms \(2l-1\) and \(2l\) cancel each other. Now we easily compute that

$$\begin{aligned} B^{2p+1}(S e_1,e_2,i,j)=-B^{2p}(R(e_1,e_2)S e_1,e_2,i,j)+B^{2p}(S e_1,S e_2,i,j) \end{aligned}$$

and

$$\begin{aligned} B^{2p+1}(e_1,S e_2,i,j)=-B^{2p}(S e_1,S e_2,i,j)-B^{2p}(e_1,R(e_1,e_2)S e_2,i,j) \end{aligned}$$

In consequence

$$\begin{aligned} B^{2p+2}(e_1,e_2,i,j)&=-2 B^{2p}(S e_1,S e_2,i,j)+B^{2p}(R(e_1,e_2)S e_1,e_2,i,j)\\&\quad -B^{2p}(e_1,R(e_1,e_2)S e_2,i,j)\\&=-2\alpha \beta B^{2p}(e_1,e_1,i,j)+2\alpha \beta B^{2p}(e_2,e_2,i,j)\\&\quad -2\alpha ^2 B^{2p}(e_1,e_2,i,j)+2\beta ^2 B^{2p}(e_2,e_1,i,j)\\&\quad +(\alpha ^2+\beta ^2)B^{2p}(e_1,e_2,i,j)+(\alpha ^2+\beta ^2)B^{2p}(e_1,e_2,i,j)\\&=-2\alpha \beta (B^{2p}(e_1,e_1,i,j)-B^{2p}(e_2,e_2,i,j))\\&\quad +2\beta ^2(B^{2p}(e_1,e_2,i,j)+B^{2p}(e_2,e_1,i,j)), \end{aligned}$$

since \(R(e_1,e_2)S e_1=(\alpha ^2+\beta ^2)e_1\) and \(R(e_1,e_2)S e_2=-(\alpha ^2+\beta ^2)e_2\). Now using assumptions (56)–(58) we obtain

$$\begin{aligned} B^{2p+2}(e_1,e_2,i,j)&=-2\alpha \beta (-2^{2p}\alpha \beta (\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2))\\&\quad +4\beta ^2(2^{2p-1}\beta ^2(\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2))\\&=2^{2p+1}(\alpha ^2\beta ^2+\beta ^4)(\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2)\\&=2^{2p+1}\beta ^2(\det S_1)^{p}h(e_i,e_j)\omega (e_1,e_2) \end{aligned}$$

In a similar way we show that

$$\begin{aligned} B^{2p+2}(e_2,e_1,i,j)=2^{2p+1}\beta ^2(\det S_1)^{p}h(e_i,e_j)\omega (e_1,e_2). \end{aligned}$$

Finally,

$$\begin{aligned} B^{2p+2}(e_1,e_1,i,j)&-B^{2p+2}(e_2,e_2,i,j)\\&=-B^{2p+1}(R(e_1,e_2)e_1,e_1,i,j)-B^{2p+1}(e_1,R(e_1,e_2)e_1,i,j)\\&\quad +B^{2p+1}(R(e_1,e_2)e_2,e_2,i,j)+B^{2p+1}(e_2,R(e_1,e_2)e_2,i,j)\\&=-B^{2p+1}(S e_1,e_1,i,j)-B^{2p+1}(e_1,S e_1,i,j)\\&\quad -B^{2p+1}(S e_2,e_2,i,j)-B^{2p+1}(e_2,S e_2,i,j)\\&=-2\alpha (B^{2p+1}(e_1,e_1,i,j)+B^{2p+1}(e_2,e_2,i,j))\\&=-2\alpha (-B^{2p}(S e_1,e_1,i,j)-B^{2p}(e_1,S e_1,i,j)\\&\quad +B^{2p}(S e_2,e_2,i,j)+B^{2p}(e_2,S e_2,i,j))\\&=-2\alpha (-2\alpha (B^{2p}(e_1,e_1,i,j)-B^{2p}(e_2,e_2,i,j))+4\beta B^{2p}(e_1,e_2,i,j))\\&=4\alpha ^2(B^{2p}(e_1,e_1,i,j)-B^{2p}(e_2,e_2,i,j))-8\alpha \beta B^{2p}(e_1,e_2,i,j). \end{aligned}$$

Using (56)–(58) we get

$$\begin{aligned} B^{2p+2}(e_1,e_1,i,j)&-B^{2p+2}(e_2,e_2,i,j)\\&=4\alpha ^2(B^{2p}(e_1,e_1,i,j)-B^{2p}(e_2,e_2,i,j))-8\alpha \beta B^{2p}(e_1,e_2,i,j)\\&=4\alpha ^2(-2^{2p}\alpha \beta (\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2))\\&\quad -8\alpha \beta \cdot 2^{2p-1}\beta ^2(\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2)\\&=-2^{2p+2}(\alpha ^3\beta +\alpha \beta ^3)(\det S_1)^{p-1}h(e_i,e_j)\omega (e_1,e_2)\\&=-2^{2p+2}\alpha \beta (\det S_1)^{p}h(e_i,e_j)\omega (e_1,e_2). \end{aligned}$$

Now by the induction principle (56)–(58) hold for all \(p\ge 1\). \(\square \)
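The two factorizations used in the last displays, \(\alpha ^2\beta ^2+\beta ^4=\beta ^2\det S_1\) and \(\alpha ^3\beta +\alpha \beta ^3=\alpha \beta \det S_1\) with \(\det S_1=\alpha ^2+\beta ^2\), can be checked symbolically. A minimal sympy sketch (our verification aid, not part of the proof):

```python
import sympy as sp

a, b = sp.symbols('alpha beta', real=True)
detS1 = a**2 + b**2   # determinant of the 2-dimensional complex block

# alpha^2*beta^2 + beta^4 = beta^2 * det S1, used for B^{2p+2}(e_1,e_2,i,j):
assert sp.expand(a**2*b**2 + b**4 - b**2*detS1) == 0

# alpha^3*beta + alpha*beta^3 = alpha*beta * det S1, used for
# B^{2p+2}(e_1,e_1,i,j) - B^{2p+2}(e_2,e_2,i,j):
assert sp.expand(a**3*b + a*b**3 - a*b*detS1) == 0
```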

In the next three lemmas we study properties of a complex Jordan block of dimension greater than 2 in relation to the other Jordan blocks in the decomposition. Thus, in these lemmas we implicitly assume that the Jordan decomposition contains more than one (not necessarily complex) block.

Lemma 4.4

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(i\in \{1,\ldots ,2k-1\}\setminus \{3\}\), \(s\in \{1,\ldots ,2k\}\), \(j>2k\), \(p\ge 1\). Then

$$\begin{aligned}&R^p\omega (e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,e_s,e_1,e_j)\nonumber \\&\quad = {\left\{ \begin{array}{ll} 0, &{}\quad \text {for}\quad s<2k\\ -\omega (S_1e_i,e_j) &{}\quad \text {for}\quad s=2k. \end{array}\right. } \end{aligned}$$
(62)

Proof

First let us notice that

$$\begin{aligned} R(e_i,e_s)e_1= {\left\{ \begin{array}{ll} 0, &{}\quad \text {for}\quad s<2k\\ S_1e_i, &{}\quad \text {for}\quad s=2k \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} R(e_i,e_s)e_j=0. \end{aligned}$$

From the above properties we obtain

$$\begin{aligned} R\omega (e_i,e_s,e_1,e_j)&=-\omega (R(e_i,e_s)e_1,e_j)-\omega (e_1,R(e_i,e_s)e_j)\\&={\left\{ \begin{array}{ll} 0, &{}\quad \text {for}\quad s<2k\\ -\omega (S_1e_i,e_j), &{}\quad \text {for}\quad s=2k \end{array}\right. } \end{aligned}$$

thus (62) is true for \(p=1\). Moreover, we have that

$$\begin{aligned} R(e_1,e_{2k-2})e_s&= {\left\{ \begin{array}{ll} 0, &{}\quad \text {for}\quad s\in \{1,\ldots ,2k-1\}\setminus \{3\} \\ S_1e_1, &{}\quad \text {for}\quad s=3 \\ -S_1e_{2k-2}, &{}\quad \text {for}\quad s=2k, \end{array}\right. }\\ \end{aligned}$$

and

$$\begin{aligned} R(e_1,e_{2k-2})e_1=R(e_1,e_{2k-2})e_{2k-2}=R(e_1,e_{2k-2})e_i=R(e_1,e_{2k-2})e_j=0. \end{aligned}$$

Now, assume that (62) is true for some \(p\ge 1\). Using the above formulas we easily get

$$\begin{aligned}&R^{p+1}\omega (e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,e_s,e_1,e_j)\\&=(R(e_1,e_{2k-2})\cdot R^p\omega )(e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,e_s,e_1,e_j)\\&=-R^p\omega (e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,R(e_1,e_{2k-2})e_s,e_1,e_j)\\&= {\left\{ \begin{array}{ll} 0, &{} \text {for }s\in \{1,\ldots ,2k-1\}\setminus \{3\}\\ -R^p\omega (e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,S_1e_1,e_1,e_j), &{} \text {for } s=3\\ R^p\omega (e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,S_1e_{2k-2},e_1,e_j), &{} \text {for } s=2k \end{array}\right. }\\&= {\left\{ \begin{array}{ll} 0, &{} \text {for }s\in \{1,\ldots ,2k-1\}\setminus \{3\}\\ -R^p\omega (e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,\alpha e_1-\beta e_2+e_3,e_1,e_j), &{}\text {for }s=3\\ R^p\omega (e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,\beta e_{2k-3}+\alpha e_{2k-2}+e_{2k},e_1,e_j), &{}\text {for } s=2k \end{array}\right. }\\&= {\left\{ \begin{array}{ll} 0, &{} \text {for } s\in \{1,\ldots ,2k-1\}\setminus \{3\}\\ 0,&{}\text {for } s=3\\ R^p\omega (e_1,e_{2k-2},\ldots ,e_1,e_{2k-2},e_i,e_{2k},e_1,e_j),&{}\text {for }s=2k. \end{array}\right. }\\&= {\left\{ \begin{array}{ll} 0, &{}\text {for }s\in \{1,\ldots ,2k-1\}\setminus \{3\}\\ 0, &{}\text {for }s=3\\ -\omega (S_1e_i,e_j), &{}\text {for } s=2k. \end{array}\right. } \end{aligned}$$

Now by the induction principle (62) holds for all \(p\ge 1\). \(\square \)
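The multiplication rules for \(S_1\) used in this proof and the ones below determine an explicit real \(2k\times 2k\) matrix: 2-dimensional rotation-like blocks on the diagonal and identity blocks directly below them. The following numpy sketch (our illustration, not part of the argument) verifies these rules and \(\det S_1=(\alpha ^2+\beta ^2)^k\) for \(k=3\):

```python
import numpy as np

def complex_jordan_block(alpha, beta, k):
    """Real 2k x 2k matrix realizing the rules quoted in the proofs:
    S1 e1 = alpha*e1 - beta*e2 + e3,  S1 e2 = beta*e1 + alpha*e2 + e4, ...,
    S1 e_{2k-1} = alpha*e_{2k-1} - beta*e_{2k},
    S1 e_{2k}   = beta*e_{2k-1} + alpha*e_{2k}."""
    D = np.array([[alpha, beta], [-beta, alpha]])
    S = np.zeros((2 * k, 2 * k))
    for j in range(k):
        S[2*j:2*j+2, 2*j:2*j+2] = D               # rotation-like diagonal block
        if j < k - 1:
            S[2*j+2:2*j+4, 2*j:2*j+2] = np.eye(2)  # identity sub-block
    return S

alpha, beta, k = 1.5, 2.0, 3
S = complex_jordan_block(alpha, beta, k)
e = np.eye(2 * k)  # e[:, i] is the basis vector e_{i+1}

# S1 e1 = alpha*e1 - beta*e2 + e3:
assert np.allclose(S @ e[:, 0], alpha*e[:, 0] - beta*e[:, 1] + e[:, 2])
# S1 e_{2k} = beta*e_{2k-1} + alpha*e_{2k}:
assert np.allclose(S @ e[:, 2*k-1], beta*e[:, 2*k-2] + alpha*e[:, 2*k-1])
# det S1 = (alpha^2 + beta^2)^k, nonzero whenever beta != 0:
assert np.isclose(np.linalg.det(S), (alpha**2 + beta**2)**k)
```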

Lemma 4.5

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(j>2k\), \(p\ge 1\). Then

$$\begin{aligned} R^p\omega (e_1&,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k},e_1,e_j)=-(-\beta )^{p-1}\omega (S_1e_3,e_j). \end{aligned}$$
(63)

Proof

For \(p=1\), by direct computations, we obtain

$$\begin{aligned} R\omega (e_3,e_{2k},e_1,e_j)&=-\omega (R(e_3,e_{2k})e_1,e_j)-\omega (e_1,R(e_3,e_{2k})e_j)\\&=-\omega (S_1e_3,e_j). \end{aligned}$$

Now let us assume that (63) holds for some \(p\ge 1\). Since

$$\begin{aligned} R(e_1,e_{2k-1})e_1&=R(e_1,e_{2k-1})e_3=R(e_1,e_{2k-1})e_{2k-1}=R(e_1,e_{2k-1})e_j=0,\\ R(e_1,e_{2k-1})e_{2k}&=-S_1e_{2k-1}=-\alpha e_{2k-1}+\beta e_{2k} \end{aligned}$$

we compute that

$$\begin{aligned} (R^{p+1}\omega )(e_1&,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k},e_1,e_j)\\&=(R(e_1,e_{2k-1})\cdot R^p\omega )(e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k},e_1,e_j)\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,-\alpha e_{2k-1}+\beta e_{2k},e_1,e_j)\\&=\alpha R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k-1},e_1,e_j)\\&\quad -\beta R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k},e_1,e_j). \end{aligned}$$

Since

$$\begin{aligned} R(e_3,e_{2k-1})e_1=R(e_3,e_{2k-1})e_j=0 \end{aligned}$$

one may easily deduce that \(R\omega (e_3,e_{2k-1},e_1,e_j)=0\) and, more generally, that

$$\begin{aligned} R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k-1},e_1,e_j)=0 \end{aligned}$$

for all \(p\ge 1\). Thus we have

$$\begin{aligned} (R^{p+1}\omega )(e_1&,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k},e_1,e_j)\\&=-\beta R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k},e_1,e_j)\\&=-\beta \cdot (-(-\beta )^{p-1})\omega (S_1e_3,e_j)=-(-\beta )^p\omega (S_1e_3,e_j), \end{aligned}$$

where the last equality follows from (63). Now by the induction principle (63) holds for all \(p\ge 1\). \(\square \)

Lemma 4.6

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(i\in \{1,\ldots ,2k\}\), \(j>2k\). Then for every \(p\ge 1\) we have

$$\begin{aligned}&R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,e_{2k},e_{2k},e_j)\nonumber \\&\quad = {\left\{ \begin{array}{ll} 0, \quad \text {for}\quad i\in \{2,\ldots ,2k\}\\ (-\beta )^{p-1}\omega (S_1e_{2k},e_j) \quad \text {for}\quad i=1. \end{array}\right. } \end{aligned}$$
(64)

Proof

First notice that

$$\begin{aligned} R(e_i,e_{2k})e_{2k}&= {\left\{ \begin{array}{ll} 0, \quad \text {for}\quad i\in \{2,\ldots ,2k\}\\ -S_1e_{2k}, \quad \text {for}\quad i=1 \end{array}\right. }\\ R(e_i,e_{2k})e_j&=0. \end{aligned}$$

From the above we obtain

$$\begin{aligned} R\omega (e_i,e_{2k},e_{2k},e_j)&=-\omega (R(e_i,e_{2k})e_{2k},e_j)-\omega (e_{2k},R(e_i,e_{2k})e_j)\\&={\left\{ \begin{array}{ll} 0, \quad \text {for}\quad i\in \{2,\ldots ,2k\}\\ \omega (S_1e_{2k},e_j), \quad \text {for}\quad i=1, \end{array}\right. } \end{aligned}$$

thus (64) is true for \(p=1\). Moreover, we have

$$\begin{aligned} R(e_2,e_{2k})e_i= {\left\{ \begin{array}{ll} 0, \quad \text {for}\quad i\in \{2,\ldots ,2k\}\\ S_1e_2,\quad \text {for}\quad i=1 \end{array}\right. } \end{aligned}$$

Now let us assume that (64) holds for some \(p\ge 1\). Using the above formulas we obtain

$$\begin{aligned}&(R^{p+1}\omega )(e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,e_{2k},e_{2k},e_j)\\&\quad =-R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},R(e_2,e_{2k})e_i,e_{2k},e_{2k},e_j)\\&\quad = {\left\{ \begin{array}{ll} 0, \quad \text {for}\quad i\in \{2,\ldots ,2k\}\\ -R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},S_1e_2,e_{2k},e_{2k},e_j),\quad \text {for}\quad i=1 \end{array}\right. }\\&\quad = {\left\{ \begin{array}{ll} 0, \quad \text {for}\quad i\in \{2,\ldots ,2k\}\\ -R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},\beta e_1+\alpha e_2+e_4,e_{2k},e_{2k},e_j),\quad \text {for}\quad i=1 \end{array}\right. }\\&\quad = {\left\{ \begin{array}{ll} 0, \quad \text {for}\quad i\in \{2,\ldots ,2k\}\\ -\beta R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},e_1,e_{2k},e_{2k},e_j),\quad \text {for}\quad i=1 \end{array}\right. }\\&\quad = {\left\{ \begin{array}{ll} 0, \quad \text {for}\quad i\in \{2,\ldots ,2k\}\\ -\beta \cdot (-\beta )^{p-1}\omega (S_1e_{2k},e_j)=(-\beta )^p\omega (S_1e_{2k},e_j),\quad \text {for}\quad i=1 \end{array}\right. } \end{aligned}$$

Now by the induction principle (64) holds for all \(p\ge 1\). \(\square \)

Now we can prove

Corollary 4.7

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 1\). If \(R^p\omega =0\) for some \(p\ge 1\) then for every \(X\in {\text {span}} \{e_1,\ldots ,e_{2k}\}:=V\) and \(j>2k\)

$$\begin{aligned} \omega (X,e_j)=0. \end{aligned}$$
(65)

Proof

For \(k=1\) Corollary 4.7 is an immediate consequence of Lemma 4.2. For \(k\ge 2\) by Lemmas 4.4, 4.5 and 4.6 we get

$$\begin{aligned} \omega (S_1e_i,e_j)=0 \quad \text {for}\quad i\in \{1,\ldots ,2k\}. \end{aligned}$$

Since \(\det S_1\ne 0\), the map \(S_1:V\rightarrow V\) is an isomorphism (so \(\{S_1e_1,\ldots ,S_1e_{2k}\}\) spans V), and we obtain that

$$\begin{aligned} \omega (S_1X,e_j)=0 \end{aligned}$$

for all \(X\in V\). Since \(S_1\) maps V onto itself, this gives (65). \(\square \)
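The spanning argument can be illustrated numerically: a functional that vanishes on the columns of an invertible matrix vanishes identically. A minimal numpy sketch with a 2-dimensional complex block playing the role of \(S_1\):

```python
import numpy as np

alpha, beta = 1.5, 2.0
S1 = np.array([[alpha, beta], [-beta, alpha]])   # 2-dimensional complex block
assert np.isclose(np.linalg.det(S1), alpha**2 + beta**2)  # det S1 != 0

# phi(X) := omega(X, e_j).  If phi vanishes on S1 e_1, ..., S1 e_{2k},
# then S1^T phi = 0, and invertibility of S1 forces phi = 0:
phi = np.linalg.solve(S1.T, np.zeros(2))  # unique solution of S1^T phi = 0
assert np.allclose(phi, 0)
```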

In the next few lemmas we study intrinsic properties of a 2k-dimensional complex Jordan block for \(k\ge 2\).

Lemma 4.8

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(i\in \{1,\ldots ,2k-1\}\setminus \{2\}\). Then for every \(p\ge 1\)

$$\begin{aligned} R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,e_{2k-1})&=0,\end{aligned}$$
(66)
$$\begin{aligned} R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,e_{2k})&=(-\beta )^{p-1}\omega (e_i,S_1e_{2k-1}). \end{aligned}$$
(67)

Proof

The Gauss equation implies that

$$\begin{aligned} R(e_1,e_{2k-1})e_1=R(e_1,e_{2k-1})e_{2k-1}=R(e_1,e_{2k-1})e_i=0. \end{aligned}$$

By straightforward computations we get (66) for every \(p\ge 1\). In order to prove (67) again by the Gauss equation we have

$$\begin{aligned} R(e_1,e_{2k-1})e_{2k}=-S_1e_{2k-1}=-\alpha e_{2k-1}+\beta e_{2k}. \end{aligned}$$

In particular, for \(p=1\), we get

$$\begin{aligned} R\omega (e_1,e_{2k-1},e_i,e_{2k})=\omega (e_i,S_1e_{2k-1}). \end{aligned}$$

Now, assume that the formula (67) is true for some \(p\ge 1\). Then, for \(p+1\), we get

$$\begin{aligned} (R^{p+1}\omega )(e_1&,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,e_{2k})\\&=(R(e_1,e_{2k-1})\cdot R^p\omega )(e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,e_{2k})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,-S_1e_{2k-1})\\&=R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,\alpha e_{2k-1}-\beta e_{2k})\\&=\alpha R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,e_{2k-1})\\&\quad -\beta R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,e_{2k}) \end{aligned}$$

Now, by (66) and the induction principle, the formula (67) holds for every \(p\ge 1\). \(\square \)

Lemma 4.9

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(p\ge 1\), \(i\in \{1,\ldots , 2k\}\setminus \{1,2k-1\}\). Then

$$\begin{aligned} R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,e_{2k})&=0, \end{aligned}$$
(68)
$$\begin{aligned} R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,e_{2k-1})&=\beta ^{p-1}\omega (e_i,S_1e_{2k}). \end{aligned}$$
(69)

Proof

The Gauss equation implies that

$$\begin{aligned} R(e_2,e_{2k})e_2=R(e_2,e_{2k})e_{2k}=R(e_2,e_{2k})e_i=0. \end{aligned}$$

By straightforward computations we get (68) for every \(p\ge 1\). In order to prove (69) again by the Gauss equation we have

$$\begin{aligned} R(e_2,e_{2k})e_{2k-1}=-S_1e_{2k}=-\beta e_{2k-1}-\alpha e_{2k}. \end{aligned}$$

In particular, for \(p=1\), we get

$$\begin{aligned} R\omega (e_2,e_{2k},e_i,e_{2k-1})=\omega (e_i,S_1e_{2k}). \end{aligned}$$

Now, assume that the formula (69) is true for some \(p\ge 1\). Then, for \(p+1\), we get

$$\begin{aligned} (R^{p+1}\omega )(e_2&,e_{2k},\ldots ,e_2,e_{2k},e_i,e_{2k-1})\\&=(R(e_2,e_{2k})\cdot R^p\omega )(e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,e_{2k-1})\\&=-R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,-S_1e_{2k})\\&=R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,\beta e_{2k-1}+\alpha e_{2k})\\&=\alpha R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,e_{2k}) \\&\quad +\beta R^p\omega (e_2,e_{2k},\ldots ,e_2,e_{2k},e_i,e_{2k-1}) \end{aligned}$$

Now, by (68) and the induction principle, the formula (69) holds for every \(p\ge 1\). \(\square \)

Lemma 4.10

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(p\ge 1\). If

$$\begin{aligned} \omega (e_j,e_{2k-1})=\omega (e_j,e_{2k})=0 \end{aligned}$$

for \(j\in \{3,\ldots , 2k\}\) then

$$\begin{aligned} R^p\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i)}\ldots ,e_1,e_{2k-1},e_1,e_{3})=0 \end{aligned}$$
(70)

for \(i\in \{1,\ldots , p\}\).

Proof

For \(p=1\) we have

$$\begin{aligned} R\omega (e_{2k-1},e_{2k},e_1,e_{3})&=-\omega (R(e_{2k-1},e_{2k})e_1,e_3)-\omega (e_1,R(e_{2k-1},e_{2k})e_3)\\&=-\omega (R(e_{2k-1},e_{2k})e_1,e_3)=-\omega (S_1e_{2k-1},e_3)=0, \end{aligned}$$

since \(\omega (e_3,e_{2k-1})=\omega (e_3,e_{2k})=0\) by assumption.

Assume that (70) holds for some \(p\ge 1\) and for all \(i\in \{1,\ldots , p\}\). Let \(i_0\in \{1,\ldots , p+1\}\). If \(i_0>1\) then we have

$$\begin{aligned} R^{p+1}&\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0)}\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&=-R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},R(e_1,e_{2k-1})e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&=R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},S_1 e_{2k-1}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&=R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},\alpha e_{2k-1}-\beta e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&=-\beta R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{3}) \end{aligned}$$

since \(R(e_1,e_{2k-1})e_{1}=R(e_1,e_{2k-1})e_{2k-1}=R(e_1,e_{2k-1})e_{3}=0\) and \(R(e_1,e_{2k-1})e_{2k}=-S_1e_{2k-1}=-\alpha e_{2k-1}+\beta e_{2k}\). Now by (70) we obtain that for \(i_0>1\)

$$\begin{aligned} R^{p+1}&\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0)}\ldots ,e_1,e_{2k-1},e_1,e_{3})=0. \end{aligned}$$

If \(i_0=1\) we compute

$$\begin{aligned} R^{p+1}&\omega (\overbrace{e_{2k-1},e_{2k}}^{(1)},e_1,e_{2k-1},\ldots ,\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&=-R^{p}\omega (R(e_{2k-1},e_{2k})e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&\quad -R^{p}\omega (e_1,e_{2k-1},R(e_{2k-1},e_{2k})e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&\quad \cdots \\&\quad -R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,R(e_{2k-1},e_{2k})e_1,e_{2k-1},e_1,e_{3})\\&\quad -R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},R(e_{2k-1},e_{2k})e_1,e_{3})\\&=\beta R^{p}\omega (\overbrace{e_{2k},e_{2k-1}}^{(1)},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&\quad +\beta R^{p}\omega (e_1,e_{2k-1},\overbrace{e_{2k},e_{2k-1}}^{(2)},\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&\quad \cdots \\&\quad +\beta R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,\overbrace{e_{2k},e_{2k-1}}^{(p)},e_1,e_{3})\\&\quad -R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},S_1 e_{2k-1},e_{3})\\&=-R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},S_1 e_{2k-1},e_{3}), \end{aligned}$$

since all terms but the last are equal to 0 thanks to (70). Now it is enough to show that

$$\begin{aligned} R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},S_1 e_{2k-1},e_{3})=0. \end{aligned}$$

Indeed, by Lemma 4.8 we have

$$\begin{aligned} R^{p}&\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},S_1 e_{2k-1},e_{3})\\&=R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},\alpha e_{2k-1}-\beta e_{2k},e_{3})\\&=-\alpha R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_{3},e_{2k-1})\\&\quad +\beta R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_{3},e_{2k})\\&=\beta \cdot (-\beta )^{p-1}\omega (e_3,S_1 e_{2k-1})=0, \end{aligned}$$

where the last equality follows from the assumption that \(\omega (e_3,e_{2k-1})=\omega (e_3,e_{2k})=0\). Summarising, we have shown that

$$\begin{aligned} R^{p+1}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0)}\ldots ,e_1,e_{2k-1},e_1,e_{3})=0 \end{aligned}$$

for all \(i_0\in \{1,\ldots ,p+1\}\). Now by the induction principle (70) holds for all \(p\ge 1\). \(\square \)

Lemma 4.11

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(p\ge 1\). If \(\omega (e_3,e_{2k-1})=\omega (e_3,e_{2k})=0\) then

$$\begin{aligned} R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1})&=\beta ^{p-1}(-\alpha \omega (e_1,e_{2k-1})+\beta \omega (e_2,e_{2k-1})) \end{aligned}$$
(71)
$$\begin{aligned} R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k})&=\alpha \beta ^{p-2}(-\alpha \omega (e_1,e_{2k-1})+\beta \omega (e_2,e_{2k-1}))\nonumber \\&-\alpha ^2(-\beta )^{p-2}\omega (e_1,e_{2k-1})-\alpha (-\beta )^{p-1}\omega (e_1,e_{2k}) \end{aligned}$$
(72)

Proof

For \(p=1\) we compute

$$\begin{aligned} R\omega (e_1,e_{2k-1},e_2,e_{2k-1})&=-\omega (R(e_1,e_{2k-1})e_2,e_{2k-1})-\omega (e_2,R(e_1,e_{2k-1})e_{2k-1})\\&=-\omega (\alpha e_1-\beta e_2+e_3,e_{2k-1})\\&=-\alpha \omega (e_1,e_{2k-1})+\beta \omega (e_2,e_{2k-1}), \end{aligned}$$

since by assumption \(\omega (e_3,e_{2k-1})=0\). Now, assume that the formula (71) is true for some \(p\ge 1\). Then, for \(p+1\), we get

$$\begin{aligned} (R^{p+1}\omega )(e_1&,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1})\\&=(R(e_1,e_{2k-1})\cdot R^p\omega )(e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},R(e_1,e_{2k-1})e_2,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},S_1e_1,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},\alpha e_1-\beta e_2+e_3,e_{2k-1})\\&=-\alpha R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2k-1})\\&\quad +\beta R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1})\\&\quad -R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_3,e_{2k-1})\\&=\beta R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1}) \end{aligned}$$

where the last equality follows from Lemma 4.8 (formula (66), \(i=1,3\)). Now, using (71) we obtain

$$\begin{aligned} (R^{p+1}\omega )(e_1&,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1})\\&=\beta R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1})\\&=\beta (\beta ^{p-1}(-\alpha \omega (e_1,e_{2k-1})+\beta \omega (e_2,e_{2k-1})))\\&=\beta ^p (-\alpha \omega (e_1,e_{2k-1})+\beta \omega (e_2,e_{2k-1})). \end{aligned}$$

By the induction principle (71) holds for all \(p\ge 1\).

Formula (72) can easily be obtained in a similar way, using (67), (71) and the induction principle. \(\square \)

From Lemma 4.11 we immediately get

Corollary 4.12

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(p\ge 1\). If \(\omega (e_3,e_{2k-1})=\omega (e_3,e_{2k})=0\) then

$$\begin{aligned}&R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,S_1e_{2k-1}) \nonumber \\&\quad =(-1)^p\beta ^{p-1}\alpha (\alpha \omega (e_1,e_{2k-1})-\beta \omega (e_1,e_{2k})) \end{aligned}$$
(73)

Lemma 4.13

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(p\ge 1\). If \(\omega (e_3,e_{2k-1})=\omega (e_3,e_{2k})=0\) then for \(i\in \{1,\ldots ,p\}\)

$$\begin{aligned} R^p&\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i)}\ldots ,e_1,e_{2k-1},e_1,e_{2})\nonumber \\&= {\left\{ \begin{array}{ll} (-\beta )^{p-1}(-\omega (S_1 e_{2k-1},e_2)+\omega (e_1,S_1 e_{2k})), &{} \text {if } \quad i=1 \\ 0, &{} \text {if}\quad i>1. \end{array}\right. } \end{aligned}$$
(74)

Proof

We directly check that

$$\begin{aligned} R\omega (e_{2k-1},e_{2k},e_1,e_{2})&=-\omega (R(e_{2k-1},e_{2k})e_1,e_2)-\omega (e_1,R(e_{2k-1},e_{2k})e_2)\\&=-\omega (S_1 e_{2k-1},e_2)+\omega (e_1,S_1 e_{2k}), \end{aligned}$$

so (74) is true for \(p=1\). Now let us assume that (74) is true for some \(p\ge 1\) and for all \(i\in \{1,\ldots ,p\}\). Let \(i_0\in \{1,\ldots ,p+1\}\).

When \(i_0>1\) using the fact that \(R(e_1,e_{2k-1})e_1=R(e_1,e_{2k-1})e_{2k-1}=0\) we get

$$\begin{aligned} R^{p+1}&\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0)}\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&=-R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},R(e_1,e_{2k-1})e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad -R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,R(e_1,e_{2k-1})e_{2})\\&=R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},S_1 e_{2k-1}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad -R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,S_1 e_{1})\\&=-\beta R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad +\beta R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad -R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{3})\\&=-R^{p}\omega (e_1,e_{2k-1},\ldots ,\overbrace{e_{2k-1},e_{2k}}^{(i_0-1)}\ldots ,e_1,e_{2k-1},e_1,e_{3})=0 \end{aligned}$$

where the last equality follows from Lemma 4.10.

If \(i_0=1\), using the fact that \(R(e_{2k-1},e_{2k})e_{2k-1}=0\) we obtain

$$\begin{aligned} R^{p+1}&\omega (\overbrace{e_{2k-1},e_{2k}}^{(1)},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&=-R^{p}\omega (R(e_{2k-1},e_{2k})e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad -R^{p}\omega (e_1,e_{2k-1},R(e_{2k-1},e_{2k})e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad \cdots \\&\quad -R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,R(e_{2k-1},e_{2k})e_1,e_{2k-1},e_1,e_{2})\\&\quad -R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},R(e_{2k-1},e_{2k})e_1,e_{2})\\&\quad -R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,R(e_{2k-1},e_{2k})e_{2})\\&=\beta R^{p}\omega (\overbrace{e_{2k},e_{2k-1}}^{(1)},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad +\beta R^{p}\omega (e_1,e_{2k-1},\overbrace{e_{2k},e_{2k-1}}^{(2)},\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad \cdots \\&\quad +\beta R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,\overbrace{e_{2k},e_{2k-1}}^{(p)},e_1,e_{2})\\&\quad -R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},S_1 e_{2k-1},e_{2})\\&\quad +R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,S_1 e_{2k})\\&=-\beta R^{p}\omega (\overbrace{e_{2k-1},e_{2k}}^{(1)},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&\quad +R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_{2},S_1 e_{2k-1})\\&\quad +R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,S_1 e_{2k}),\\ \end{aligned}$$

since all but the first and the last two terms are equal to 0 by (74). Now Lemma 4.8 and Corollary 4.12 imply that

$$\begin{aligned}&R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,S_1 e_{2k})\\&\quad +R^{p}\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_{2},S_1 e_{2k-1})\\&=\alpha (-\beta )^{p-1}\omega (e_1,S_1e_{2k-1})+(-1)^p\beta ^{p-1}\alpha (\alpha \omega (e_1,e_{2k-1})-\beta \omega (e_1,e_{2k}))\\&=0. \end{aligned}$$

In consequence, using (74) we get

$$\begin{aligned} R^{p+1}&\omega (\overbrace{e_{2k-1},e_{2k}}^{(1)},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&=-\beta R^{p}\omega (\overbrace{e_{2k-1},e_{2k}}^{(1)},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2})\\&=(-\beta )^{p}(-\omega (S_1 e_{2k-1},e_2)+\omega (e_1,S_1 e_{2k})). \end{aligned}$$

Now the claim follows from the induction principle. \(\square \)
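The cancellation of the last two terms above can be verified symbolically for small p, using \(\omega (e_1,S_1e_{2k-1})=\alpha \omega (e_1,e_{2k-1})-\beta \omega (e_1,e_{2k})\). A sympy sketch of ours, with \(\omega _1,\omega _2\) standing for \(\omega (e_1,e_{2k-1})\) and \(\omega (e_1,e_{2k})\):

```python
import sympy as sp

a, b, w1, w2 = sp.symbols('alpha beta omega1 omega2', real=True)
# omega(e1, S1 e_{2k-1}), with S1 e_{2k-1} = alpha*e_{2k-1} - beta*e_{2k}:
w_S = a*w1 - b*w2

for p in range(1, 6):
    term1 = a * (-b)**(p - 1) * w_S                    # from formula (67)
    term2 = (-1)**p * b**(p - 1) * a * (a*w1 - b*w2)   # from Corollary 4.12
    assert sp.simplify(term1 + term2) == 0             # the two terms cancel
```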

To simplify the notation further, let us denote:

$$\begin{aligned} A_{ij}^p:=R^p\omega (e_1,e_{2k-1},e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_i,e_{2k-1},e_j,e_{2k-1}) \end{aligned}$$
(75)

where \(i,j\in \{1,2,3\}\) and \(p\ge 1\). We have the following lemma:

Lemma 4.14

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(p\ge 1\). Then

$$\begin{aligned} A_{11}^p&=A_{13}^p=A_{31}^p=A_{33}^p=0, \end{aligned}$$
(76)
$$\begin{aligned} A_{12}^{p+1}&=\beta A_{12}^{p}, \end{aligned}$$
(77)
$$\begin{aligned} A_{21}^{p+1}&=\beta A_{21}^{p}, \end{aligned}$$
(78)
$$\begin{aligned} A_{23}^{p+1}&=\beta A_{23}^{p}, \end{aligned}$$
(79)
$$\begin{aligned} A_{32}^{p+1}&=\beta A_{32}^{p}, \end{aligned}$$
(80)
$$\begin{aligned} A_{22}^{p+1}&=-\alpha (A_{12}^{p}+A_{21}^{p})+2\beta A_{22}^{p}-(A_{32}^{p}+A_{23}^{p}). \end{aligned}$$
(81)

Proof

In order to prove (76) first note that

$$\begin{aligned} A_{11}^p&=A_{13}^p=0 \end{aligned}$$

immediately follows from Lemma 4.8, formula (66). By the Gauss equation we have

$$\begin{aligned} R(e_3&,e_{2k-1})e_1=R(e_3,e_{2k-1})e_{2k-1}=R(e_3,e_{2k-1})e_3\\&=R(e_1,e_{2k-1})e_{1}=R(e_1,e_{2k-1})e_{2k-1}=R(e_1,e_{2k-1})e_{3}=0. \end{aligned}$$

Using the above equalities we easily obtain that

$$\begin{aligned} A_{31}^p=A_{33}^p=0. \end{aligned}$$

To prove (77) we compute

$$\begin{aligned} A_{12}^{p+1}&=R^{p+1}\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2k-1},e_2,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2k-1},R(e_1,e_{2k-1})e_2,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2k-1},S_1e_1,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_1,e_{2k-1},\alpha e_1-\beta e_2+e_3,e_{2k-1}),\\&=-\alpha A_{11}^{p}+\beta A_{12}^{p}-A_{13}^{p}=\beta A_{12}^{p}, \end{aligned}$$

where the last equality follows from (76). Formulas (78)–(80) are proved in a similar way to (77). To prove (81) we compute

$$\begin{aligned} A_{22}^{p+1}&=R^{p+1}\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1},e_2,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},R(e_1,e_{2k-1})e_2,e_{2k-1},e_2,e_{2k-1})\\&\quad -R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1},R(e_1,e_{2k-1})e_2,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},S_1e_1,e_{2k-1},e_2,e_{2k-1})\\&\quad -R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1},S_1e_1,e_{2k-1})\\&=-R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},\alpha e_1-\beta e_2+e_3,e_{2k-1},e_2,e_{2k-1})\\&\quad -R^p\omega (e_1,e_{2k-1},\ldots ,e_1,e_{2k-1},e_2,e_{2k-1},\alpha e_1-\beta e_2+e_3,e_{2k-1})\\&=-\alpha A_{12}^{p}+\beta A_{22}^{p}-A_{32}^{p}-\alpha A_{21}^{p}+\beta A_{22}^{p}-A_{23}^{p}\\&=-\alpha ( A_{12}^{p}+ A_{21}^{p})+2\beta A_{22}^{p}-(A_{32}^{p}+A_{23}^{p}). \end{aligned}$$

\(\square \)

From the above lemma we obtain:

Corollary 4.15

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\), \(p\ge 1\). If

$$\begin{aligned} \omega (e_j,e_{2k-1})=\omega (e_j,e_{2k})=0 \end{aligned}$$
(82)

for \(j\in \{3,\ldots , 2k\}\) then

$$\begin{aligned} A_{12}^p&=\beta ^{p-1}(-\alpha \omega (e_1,e_{2k-1})+\beta \omega (e_2,e_{2k-1})), \end{aligned}$$
(83)
$$\begin{aligned} A_{21}^p&=\beta ^{p-1}(\alpha \omega (e_1,e_{2k-1})-\beta \omega (e_1,e_{2k})), \end{aligned}$$
(84)
$$\begin{aligned} A_{22}^{p+1}&=-\alpha \beta ^p(\omega (e_2,e_{2k-1})-\omega (e_1,e_{2k}))+2\beta A_{22}^{p}. \end{aligned}$$
(85)

Proof

From formulas (77), (78) and assumption (82) we obtain

$$\begin{aligned} A_{12}^p&=\beta ^{p-1}\cdot A_{12}^1=\beta ^{p-1}\cdot R \omega (e_1,e_{2k-1},e_2,e_{2k-1})\\&=-\beta ^{p-1}\omega (R(e_1,e_{2k-1})e_2,e_{2k-1})=-\beta ^{p-1}\omega (S_1e_1,e_{2k-1})\\&=-\alpha \beta ^{p-1}\omega (e_1,e_{2k-1})+\beta ^p\omega (e_2,e_{2k-1}).\\ A_{21}^p&=\beta ^{p-1}\cdot A_{21}^1=\beta ^{p-1}\cdot R \omega (e_2,e_{2k-1},e_1,e_{2k-1})\\&=-\beta ^{p-1}\omega (e_1,R(e_2,e_{2k-1})e_{2k-1})=\beta ^{p-1}\omega (e_1,S_1e_{2k-1})\\&=\alpha \beta ^{p-1}\omega (e_1,e_{2k-1})-\beta ^p\omega (e_1,e_{2k}), \end{aligned}$$

which proves (83) and (84). Using (83) and (84) we obtain

$$\begin{aligned} A_{12}^p+A_{21}^p=\beta ^p (\omega (e_2,e_{2k-1})-\omega (e_1,e_{2k})). \end{aligned}$$

Moreover from formulas (79), (80) and (82) we get

$$\begin{aligned} A_{32}^p&=\beta ^{p-1}A_{32}^1=\beta ^{p-1}R \omega (e_3,e_{2k-1},e_2,e_{2k-1})\\&=-\beta ^{p-1}\omega (R(e_3,e_{2k-1})e_2,e_{2k-1})=-\beta ^{p-1}\omega (S_1e_3,e_{2k-1})=0,\\ A_{23}^p&=\beta ^{p-1}\cdot A_{23}^1=0. \end{aligned}$$

Now (85) immediately follows from (81). \(\square \)
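The identity \(A_{12}^p+A_{21}^p=\beta ^p(\omega (e_2,e_{2k-1})-\omega (e_1,e_{2k}))\) used above can be double-checked symbolically for small p. A sympy sketch, with \(w_1,w_2,w_3\) standing for \(\omega (e_1,e_{2k-1})\), \(\omega (e_2,e_{2k-1})\), \(\omega (e_1,e_{2k})\):

```python
import sympy as sp

a_, b_ = sp.symbols('alpha beta')
w1, w2, w3 = sp.symbols('w1 w2 w3')

for p in range(1, 6):
    A12 = b_**(p - 1) * (-a_*w1 + b_*w2)   # formula (83)
    A21 = b_**(p - 1) * (a_*w1 - b_*w3)    # formula (84)
    # A12^p + A21^p = beta^p (omega(e2,e_{2k-1}) - omega(e1,e_{2k})),
    # which combined with (81) and A32^p = A23^p = 0 gives (85):
    assert sp.expand(A12 + A21 - b_**p * (w2 - w3)) == 0
```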

Lemma 4.16

Let \(S_1\) be a 2k-dimensional complex Jordan block, \(k\ge 2\). If \(R^p\omega =0\) for some \(p\ge 1\) then for \(i\in \{3,\ldots , 2k\}\)

$$\begin{aligned} \omega (e_i,e_{2k-1})=\omega (e_i,e_{2k})=0. \end{aligned}$$

Proof

From Lemma 4.8, for \(i=3,\ldots ,2k-1\) we have

$$\begin{aligned} \omega (e_i,S_1e_{2k-1})=0. \end{aligned}$$
(86)

In particular for \(i=2k-1\) we obtain

$$\begin{aligned} 0=\omega (e_{2k-1},S_1e_{2k-1})=\omega (e_{2k-1},\alpha e_{2k-1}-\beta e_{2k})=-\beta \omega (e_{2k-1},e_{2k}). \end{aligned}$$

Thus

$$\begin{aligned} \omega (e_{2k-1},e_{2k})=0, \end{aligned}$$

since \(\beta \ne 0\). Now we have

$$\begin{aligned} \omega (e_{2k},S_1e_{2k-1})=\omega (e_{2k},\alpha e_{2k-1}-\beta e_{2k})=\alpha \omega (e_{2k},e_{2k-1})=0. \end{aligned}$$

That is, (86) is also true for \(i=2k\). From Lemma 4.9 for \(i\in \{3,\ldots ,2k\}\setminus \{2k-1\}\) we have

$$\begin{aligned} \omega (e_i,S_1e_{2k})=0. \end{aligned}$$
(87)

Since \(\omega (e_{2k-1},e_{2k})=0\) we also get that

$$\begin{aligned} \omega (e_{2k-1},S_1e_{2k})=\omega (e_{2k-1},\beta e_{2k-1}+\alpha e_{2k})=\alpha \omega (e_{2k-1},e_{2k})=0. \end{aligned}$$

That is, (87) is also valid for \(i=2k-1\). Now from (86) and (87), for \(i\in \{3,\ldots ,2k\}\) we get

$$\begin{aligned} 0&=\alpha \omega (e_i,S_1e_{2k-1})+\beta \omega (e_i,S_1e_{2k})\\&=\omega (e_i,\alpha \cdot (\alpha e_{2k-1}-\beta e_{2k})+\beta \cdot (\beta e_{2k-1}+\alpha e_{2k}))\\&=(\alpha ^2+\beta ^2)\omega (e_i,e_{2k-1}). \end{aligned}$$

In a similar way we compute that

$$\begin{aligned} 0&=-\beta \omega (e_i,S_1e_{2k-1})+\alpha \omega (e_i,S_1e_{2k})\\&=(\alpha ^2+\beta ^2)\omega (e_i,e_{2k}). \end{aligned}$$

Since \(\alpha ^2+\beta ^2\ne 0\) the above equations imply that

$$\begin{aligned} \omega (e_i,e_{2k-1})=\omega (e_i,e_{2k})=0 \end{aligned}$$

for \(i\in \{3,\ldots ,2k\}\). \(\square \)
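The elimination at the end of the proof is pure linear algebra and can be checked symbolically. A sympy sketch, with \(x,y\) standing for \(\omega (e_i,e_{2k-1})\) and \(\omega (e_i,e_{2k})\):

```python
import sympy as sp

a_, b_, x, y = sp.symbols('alpha beta x y', real=True)
u = a_*x - b_*y   # omega(e_i, S1 e_{2k-1}), since S1 e_{2k-1} = alpha*e_{2k-1} - beta*e_{2k}
v = b_*x + a_*y   # omega(e_i, S1 e_{2k}),   since S1 e_{2k}   = beta*e_{2k-1} + alpha*e_{2k}

# The two combinations used at the end of the proof:
assert sp.expand(a_*u + b_*v - (a_**2 + b_**2)*x) == 0   # gives (alpha^2+beta^2) x = 0
assert sp.expand(-b_*u + a_*v - (a_**2 + b_**2)*y) == 0  # gives (alpha^2+beta^2) y = 0
```

Since \(\alpha ^2+\beta ^2\ne 0\), vanishing of u and v forces \(x=y=0\), as claimed.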

Now we are ready to prove that the decomposition from Lemma 2.4 cannot contain complex Jordan blocks. Namely, we have the following:

Theorem 4.17

Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) (\(\dim M\ge 4\)) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). If \(R^p\omega =0\) for some \(p\ge 1\) and

$$\begin{aligned} S=\left[ \begin{matrix} S_1 &{}\quad 0 &{}\quad \ldots &{}\quad 0 \\ 0 &{}\quad S_2 &{}\quad \ldots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad \ldots &{}\quad S_{q+r} \end{matrix}\right] \end{aligned}$$
(88)

is the Jordan decomposition of S as stated in Lemma 2.4, then (35) does not contain complex Jordan blocks (that is, \(r=0\)).

Proof

Let \(\{e_1,\ldots ,e_{2n}\}\) be the basis of \(T_xM\) from Lemma 2.4. Without loss of generality, as described at the beginning of this section, we can change the order of the \(S_i\) and \(H_i\) in such a way that \(S_{1},\ldots ,S_{r}\) are complex blocks and \(S_{r+1},\ldots ,S_{r+q}\) are real blocks. Moreover, we can assume that \(\dim S_1\ge \dim S_i\) for \(i=2,\ldots ,r\).

First assume that \(S_1\) is a complex block of dimension 2k and \(k\ge 2\). By Lemma 4.16 we have that \(\omega (e_i,e_{2k-1})=\omega (e_i,e_{2k})=0\) for \(i\in \{3,\ldots , 2k\}\). Now from Corollary 4.15 (formulas (83) and (84)) we get

$$\begin{aligned} \omega (e_2,e_{2k-1})=\frac{\alpha }{\beta } \omega (e_1,e_{2k-1}) \end{aligned}$$
(89)

and

$$\begin{aligned} \omega (e_1,e_{2k})= \frac{\alpha }{\beta }\omega (e_1,e_{2k-1}), \end{aligned}$$
(90)

since \(R^p\omega =0\) and \(\beta \ne 0\). In particular \(\omega (e_2,e_{2k-1})=\omega (e_1,e_{2k})\), and (85) simplifies to

$$\begin{aligned} A_{22}^{p+1}=2\beta A_{22}^{p}. \end{aligned}$$

Now one can easily find an explicit formula for \(A_{22}^{p}\). Namely, we have

$$\begin{aligned} A_{22}^{p}=2^{p-1}\beta ^{p-1}A_{22}^{1}&=2^{p-1}\beta ^{p-1}R\omega (e_2,e_{2k-1},e_2,e_{2k-1})\\&=2^{p-1}\beta ^{p-1}(-\omega (S_1 e_2,e_{2k-1})+\omega (e_2,S_1 e_{2k-1}))\\&=2^{p-1}\beta ^{p-1}(-\beta \omega (e_1,e_{2k-1})-\alpha \omega (e_2,e_{2k-1})\\&\quad +\alpha \omega (e_2,e_{2k-1})-\beta \omega (e_2,e_{2k}))\\&=-2^{p-1}\beta ^{p}(\omega (e_1,e_{2k-1})+\omega (e_2,e_{2k})). \end{aligned}$$

On the other hand we have \(A_{22}^{p}=0\) (since \(R^p\omega =0\)) and in consequence

$$\begin{aligned} \omega (e_2,e_{2k})=-\omega (e_1,e_{2k-1}). \end{aligned}$$
(91)

From Lemma 4.13 (formula (74)) we obtain

$$\begin{aligned} \omega (S_1 e_{2k-1},e_2)-\omega (e_1,S_1 e_{2k})=0 \end{aligned}$$

that is

$$\begin{aligned} \alpha \omega (e_{2k-1},e_2)-\beta \omega (e_{2k},e_2)-\beta \omega (e_1,e_{2k-1})-\alpha \omega (e_1,e_{2k})=0. \end{aligned}$$

Now using (89)–(91) the above implies that

$$\begin{aligned} -\alpha \cdot \frac{\alpha }{\beta } \omega (e_1,e_{2k-1})-\beta \omega (e_1,e_{2k-1})-\beta \omega (e_1,e_{2k-1})-\alpha \cdot \frac{\alpha }{\beta }\omega (e_1,e_{2k-1})\\ =-2\left( \frac{\alpha ^2}{\beta }+\beta \right) \omega (e_1,e_{2k-1})=-\frac{2}{\beta }(\alpha ^2+\beta ^2)\omega (e_1,e_{2k-1})=0. \end{aligned}$$

In this way we have shown that \(\omega (e_1,e_{2k-1})=0\) and in consequence also \(\omega (e_2,e_{2k-1})=0\). Hence

$$\begin{aligned} \omega (e_i,e_{2k-1})=0 \end{aligned}$$

for \(i\in \{1,\ldots , 2k\}\). From Corollary 4.7 we also have that

$$\begin{aligned} \omega (e_i,e_{2k-1})=0 \end{aligned}$$

for \(i>2k\). Hence \(\omega \) is degenerate, which leads to a contradiction, so we must have \(k<2\). In this way we have shown that if the Jordan decomposition of S contains complex Jordan blocks, they must all be 2-dimensional.

It remains to show that 2-dimensional complex Jordan blocks are not possible either. To prove this, assume that \(S_1\) is a 2-dimensional complex block. Since \(R^p\omega =0\), also \(R^{2p}\omega =0\), and Lemma 4.2 implies that \(\omega (e_1,e_{l})=0\) for \(l=3,\ldots ,2n\). Now observe that since \(\dim M\ge 4\) there must exist \(i_0,j_0>2\) such that \(h(e_{i_0},e_{j_0})\ne 0\) (otherwise h would be degenerate). Now from Lemma 4.3 we have

$$\begin{aligned} 2^{2p-1}\beta ^2(\det S_1)^{p-1}h(e_{i_0},e_{j_0})\omega (e_1,e_2)=0. \end{aligned}$$

That is

$$\begin{aligned} \omega (e_1,e_2)=0, \end{aligned}$$

since \(h(e_{i_0},e_{j_0})\ne 0\). In this way we have shown that \(\omega \) is degenerate, since \(\omega (e_1,e_{l})=0\) for \(l=1,\ldots ,2n\), which again leads to a contradiction. \(\square \)

5 Main Results

Before we proceed with the main results of this paper, we need to recall the following two lemmas:

Lemma 5.1

([19]). Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) be a non-degenerate affine hypersurface (\(\dim M\ge 4\)) with a transversal vector field \(\xi \) and an almost symplectic form \(\omega \). Let \(x\in M\). Assume that there exist a natural number \(2 \le s\le 2n\) and a basis \(\{e_1,\ldots ,e_{2n}\}\) of \(T_xM\) such that \(h(e_i,e_j)=\varepsilon _i\delta _{ij}\), \(\varepsilon _i=\pm 1\) for \(i,j =1,\ldots , s\), \(h(e_i,e_j)=0\) for \(i=1,\ldots , s\), \(j=s+1,\ldots , 2n\), and \(Se_i=\lambda _i e_i\) with \(\lambda _i\in \mathbb {R}\) for \(i=1,\ldots , s\). Then for every \(k, j=1,2,\ldots ,s\), \(k\ne j\), and for every \(i=1,\ldots ,2n\), \(i\ne j\), \(i\ne k\), we have

$$\begin{aligned} R^{2l}\omega (e_k,e_j,e_k,e_j,\ldots ,e_k,e_j,e_k,e_i)=(-1)^l\varepsilon _k^l\varepsilon _j^l \lambda _k^l\lambda _j^l\omega (e_k,e_i) \end{aligned}$$
(92)

for every \(l\in \mathbb {N}\).

Lemma 5.2

([19]). Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) be a non-degenerate affine hypersurface (\(\dim M\ge 4\)) with a transversal vector field \(\xi \) and an almost symplectic form \(\omega \). Let \(x\in M\) and let \(X, Y, Z_1, Z_2\) be vector fields from \(T_xM\) such that \(SX=\lambda X\), \(SZ_1=0\), \(SZ_2=0\), \(h(Z_1,Z_2)=0\) and \(h(Y,Z_2)=0\). Then, for every \(l\ge 1\) we have

$$\begin{aligned} R^{2l}\omega (X&,\underbrace{Z_1,Z_2,Z_1,Z_2,\ldots ,Z_1,Z_2}_{4l},Y)\nonumber \\&=(-1)^l\lambda ^{2l}h(Z_1,Z_1)^lh(Z_2,Z_2)^l\omega (X,Y). \end{aligned}$$
(93)
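Since these two lemmas will be used repeatedly, a numeric spot-check of the case \(l=1\) may be helpful. The sketch below is hypothetical helper code, not part of the paper: it assumes the Gauss equation in the form \(R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY\) and lets each application of R act as a derivation on the remaining arguments of \(R^{k-1}\omega \); all concrete values (the entries of eps, lam and W) are arbitrary test data.

```python
# Hypothetical spot-check of (92) and (93) for l = 1; assumes the Gauss
# equation R(X,Y)Z = h(Y,Z)SX - h(X,Z)SY and the derivation action
#   (R^k w)(X,Y,Z_1,...,Z_q) = -sum_i (R^{k-1} w)(Z_1,...,R(X,Y)Z_i,...,Z_q).

DIM = 4                              # 2n = 4, s = 2n
eps = [1.0, -1.0, 1.0, 1.0]          # h = diag(eps)
lam = [2.0, 3.0, 0.0, 0.0]           # S = diag(lam); S e3 = S e4 = 0

def e(i):
    return [1.0 if j == i else 0.0 for j in range(DIM)]

def h(X, Y):
    return sum(eps[i] * X[i] * Y[i] for i in range(DIM))

def S(X):
    return [lam[i] * X[i] for i in range(DIM)]

def R(X, Y, Z):                      # Gauss equation
    a, b, SX, SY = h(Y, Z), h(X, Z), S(X), S(Y)
    return [a * SX[i] - b * SY[i] for i in range(DIM)]

W = [[0.0] * DIM for _ in range(DIM)]
W[0][1], W[0][2], W[0][3], W[1][2], W[1][3], W[2][3] = 1.3, 0.7, -0.4, 0.9, 0.2, -1.1
for i in range(DIM):
    for j in range(i + 1, DIM):
        W[j][i] = -W[i][j]           # omega(e_i, e_j) = W[i][j], antisymmetric

def omega(X, Y):
    return sum(W[i][j] * X[i] * Y[j] for i in range(DIM) for j in range(DIM))

def R_pow_omega(k, args):            # args holds 2k+2 vectors
    if k == 0:
        return omega(args[0], args[1])
    X, Y, rest = args[0], args[1], list(args[2:])
    return -sum(R_pow_omega(k - 1, rest[:i] + [R(X, Y, rest[i])] + rest[i + 1:])
                for i in range(len(rest)))

# (92) with l = 1 and k = 1, j = 2, i = 3 (1-based indices):
lemma51_lhs = R_pow_omega(2, [e(0), e(1), e(0), e(1), e(0), e(2)])
lemma51_rhs = -eps[0] * eps[1] * lam[0] * lam[1] * W[0][2]

# (93) with l = 1 and X = e_1, Z_1 = e_3, Z_2 = e_4, Y = e_2:
lemma52_lhs = R_pow_omega(2, [e(0), e(2), e(3), e(2), e(3), e(1)])
lemma52_rhs = -lam[0] ** 2 * h(e(2), e(2)) * h(e(3), e(3)) * W[0][1]
```

For this test data both left-hand sides agree with the corresponding right-hand sides, and neither value is zero, so the check is not vacuous.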

We will also need the following three lemmas.

Lemma 5.3

Let \(\{e_1,e_2,e_3\}\) be vectors such that \(Se_1=\alpha e_1+e_2\), \(Se_2=\alpha e_2\), \(Se_3=\lambda e_3\) and \(h(e_1,e_1)=h(e_2,e_2)=h(e_1,e_3)=h(e_2,e_3)=0\), \(h(e_1,e_2)=\eta \), \(h(e_3,e_3)=\varepsilon \), \(\eta \ne 0, \varepsilon \ne 0\). Then for every \(p\ge 1\) we have

$$\begin{aligned} R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_1,e_3)=(-1)^p\cdot (2\eta \alpha )^{p-1}\varepsilon \omega (e_1,e_2). \end{aligned}$$
(94)

Proof

By straightforward computations we obtain

$$\begin{aligned} R\omega (e_1,e_3,e_1,e_3)&=-\varepsilon \omega (e_1,e_2) \end{aligned}$$
(95)
$$\begin{aligned} R\omega (e_1,e_3,e_2,e_3)&=\varepsilon \alpha \omega (e_1,e_2) \end{aligned}$$
(96)
$$\begin{aligned} R\omega (e_2,e_3,e_1,e_3)&=-\varepsilon \alpha \omega (e_1,e_2) \end{aligned}$$
(97)
$$\begin{aligned} R\omega (e_2,e_3,e_2,e_3)&=0. \end{aligned}$$
(98)

Now for \(p\ge 1\) and \(i,j\in \{1,2\}\) we get

$$\begin{aligned} R^{p+1}\omega&(e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_i,e_3,e_j,e_3)\\&=-R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,R(e_1,e_2)e_i,e_3,e_j,e_3)\\&\quad -R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_i,e_3,R(e_1,e_2)e_j,e_3)\\&=-\alpha h(e_2,e_i)R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_j,e_3)\\&\quad +(\alpha h(e_1,e_i)-h(e_2,e_i))R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_j,e_3)\\&\quad -\alpha h(e_2,e_j)R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_i,e_3,e_1,e_3)\\&\quad +(\alpha h(e_1,e_j)-h(e_2,e_j))R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_i,e_3,e_2,e_3). \end{aligned}$$

Using different configurations of i and j we obtain the following four relations:

$$\begin{aligned} R^{p+1}\omega&(e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_2,e_3)\nonumber \\&=2\alpha h(e_1,e_2) R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_2,e_3) \end{aligned}$$
(99)
$$\begin{aligned} R^{p+1}\omega&(e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_2,e_3)\nonumber \\&=-h(e_1,e_2) R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_2,e_3) \end{aligned}$$
(100)
$$\begin{aligned} R^{p+1}\omega&(e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_1,e_3)\nonumber \\&=-h(e_1,e_2) R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_2,e_3) \end{aligned}$$
(101)
$$\begin{aligned} R^{p+1}\omega&(e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_1,e_3)\nonumber \\&=-2\alpha h(e_1,e_2) R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_1,e_3)\nonumber \\&\quad -h(e_1,e_2) R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_1,e_3)\nonumber \\&\quad -h(e_1,e_2) R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_2,e_3). \end{aligned}$$
(102)

Now from (98) and (99), by the induction principle we easily obtain that

$$\begin{aligned} R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_2,e_3)=0 \end{aligned}$$

for all \(p\ge 1\). In consequence, from (100) and (101) we also get that

$$\begin{aligned} R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_2,e_3)=0 \end{aligned}$$

and

$$\begin{aligned} R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_2,e_3,e_1,e_3)=0 \end{aligned}$$

for all \(p\ge 2\). Now (102) simplifies to the form

$$\begin{aligned} R^{p+1}\omega&(e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_1,e_3)\nonumber \\&=-2\alpha h(e_1,e_2) R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_1,e_3) \end{aligned}$$
(103)

for \(p\ge 2\). Note that using (95)–(97) one may easily show that (103) is also true for \(p=1\). From (95) we see that (94) holds for \(p=1\). Assume now that (94) is true for some \(p\ge 1\). From (103) we compute that

$$\begin{aligned} R^{p+1}\omega&(e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_1,e_3)\\&=-2\alpha h(e_1,e_2)\cdot (-1)^p\cdot (2\eta \alpha )^{p-1}\varepsilon \omega (e_1,e_2)\\&=(-1)^{p+1}\cdot (2\eta \alpha )^{p}\varepsilon \omega (e_1,e_2). \end{aligned}$$

Now by the induction principle (94) is true for all \(p\ge 1\). \(\square \)
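The base identities (95)–(98) can also be checked numerically. The following sketch is hypothetical test code (the values of \(\alpha ,\lambda ,\eta ,\varepsilon \) and the entries of \(\omega \) are arbitrary), assuming the Gauss equation \(R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY\):

```python
# Hypothetical numeric check of the base identities (95)-(98) of Lemma 5.3
# (sample values only), assuming the Gauss equation
# R(X,Y)Z = h(Y,Z)SX - h(X,Z)SY and
# R omega(X,Y,Z,V) = -omega(R(X,Y)Z, V) - omega(Z, R(X,Y)V).

DIM = 3
alpha, lam, eta, eps = 2.0, 3.0, 1.5, 1.0   # arbitrary nonzero test values
Smat = [[alpha, 0.0, 0.0],                  # S e1 = alpha e1 + e2,
        [1.0, alpha, 0.0],                  # S e2 = alpha e2, S e3 = lam e3
        [0.0, 0.0, lam]]
hmat = [[0.0, eta, 0.0],                    # h(e1,e2) = eta, h(e3,e3) = eps
        [eta, 0.0, 0.0],
        [0.0, 0.0, eps]]
Wmat = [[0.0, 0.7, -0.4],                   # omega(e_i, e_j) = Wmat[i][j]
        [-0.7, 0.0, 0.9],
        [0.4, -0.9, 0.0]]

def e(i): return [1.0 if j == i else 0.0 for j in range(DIM)]
def S(X): return [sum(Smat[i][j] * X[j] for j in range(DIM)) for i in range(DIM)]
def h(X, Y): return sum(hmat[i][j] * X[i] * Y[j] for i in range(DIM) for j in range(DIM))
def om(X, Y): return sum(Wmat[i][j] * X[i] * Y[j] for i in range(DIM) for j in range(DIM))
def R(X, Y, Z):                             # Gauss equation
    a, b, SX, SY = h(Y, Z), h(X, Z), S(X), S(Y)
    return [a * SX[i] - b * SY[i] for i in range(DIM)]
def Rw(X, Y, Z, V):
    return -om(R(X, Y, Z), V) - om(Z, R(X, Y, V))

eq95 = Rw(e(0), e(2), e(0), e(2))   # expected: -eps * omega(e1, e2)
eq96 = Rw(e(0), e(2), e(1), e(2))   # expected:  eps * alpha * omega(e1, e2)
eq97 = Rw(e(1), e(2), e(0), e(2))   # expected: -eps * alpha * omega(e1, e2)
eq98 = Rw(e(1), e(2), e(1), e(2))   # expected: 0
```

For this data all four values agree with (95)–(98).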

Lemma 5.4

Let \(\{e_1,e_2,\ldots ,e_{2n}\}\) be vectors such that \(Se_1=e_2\), \(Se_2=0\), \(h(e_1,e_1)=h(e_2,e_2)=0\), \(h(e_1,e_2)=\eta \) and \(Se_i=\lambda _{i-2} e_i\), \(h(e_1,e_i)=h(e_2,e_i)=0\), \(h(e_i,e_i)=\varepsilon _{i-2}\) for \(i=3,\ldots ,2n\), where \(\eta \ne 0\) and \(\varepsilon _{j}\ne 0\) for \(j=1,\ldots ,2n-2\). Then for every \(p\ge 1\) and for every \(i=3,\ldots ,2n\) we have

$$\begin{aligned} R^p\omega (e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})=-\eta ^p{\lambda _1}^p\omega (e_i,e_{3}). \end{aligned}$$
(104)

Proof

First we shall show that for every \(p\ge 1\) and for every \(i=3,\ldots ,2n\) the following formula holds:

$$\begin{aligned} R^p\omega (e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})=0. \end{aligned}$$
(105)

For \(p=1\) we have

$$\begin{aligned} R\omega (e_{3},e_{2},e_i,e_{2})=-\omega (R(e_{3},e_{2})e_i,e_{2})-\omega (e_i,R(e_{3},e_{2})e_{2})=0, \end{aligned}$$

since

$$\begin{aligned} R(e_{3},e_{2})e_i=0, \quad R(e_{3},e_{2})e_{2}=0. \end{aligned}$$
(106)

Now assume that (105) holds for some \(p\ge 1\); we shall show that it also holds for \(p+1\). From the Gauss equation we have

$$\begin{aligned} R(e_{3},e_{2})e_{1}=\eta Se_{3}=\eta \lambda _1e_{3}. \end{aligned}$$
(107)

Now, using (106) and (107) we obtain

$$\begin{aligned} R^{p+1}\omega&(e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&=-R^{p}\omega (\eta \lambda _1e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad -R^{p}\omega (e_{1},e_{2},\eta \lambda _1e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad -\ldots \\&\quad -R^{p}\omega (e_{1},e_{2},e_{1},e_{2},\ldots ,\eta \lambda _1e_{3},e_{2},e_i,e_{2})\\&=-\eta \lambda _1R^{p}\omega (\overbrace{e_{3},e_{2}}^{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad -\eta \lambda _1R^{p}\omega (e_{1},e_{2},\overbrace{e_{3},e_{2}}^{2},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad -\ldots \\&\quad -\eta \lambda _1R^{p}\omega (e_{1},e_{2},e_{1},e_{2},\ldots ,\overbrace{e_{3},e_{2}}^{p},e_i,e_{2})\\&=-\eta \lambda _1R^{p}\omega (e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2}), \end{aligned}$$

since all terms but the first are equal to 0 due to the following identities:

$$\begin{aligned} R(e_{1},e_{2})e_{3}&=R(e_{1},e_{2})e_i=R(e_{1},e_{2})e_{2}=0,\\ R(e_{1},e_{2})e_{1}&=\eta Se_{1}=\eta e_{2}. \end{aligned}$$

Now by the induction principle (105) is true for all \(p\ge 1\).

Now we can prove (104). First we check that (104) is satisfied for \(p=1\) and \(p=2\). Indeed, since

$$\begin{aligned} R(e_{3},e_{1})e_i=-h(e_{3},e_i)e_{2}, \quad R(e_{3},e_{1})e_{2}=\eta \lambda _1e_{3}, \quad R(e_{3},e_{1})e_{1}=0, \end{aligned}$$
(108)
$$\begin{aligned} R(e_{1},e_{2})e_{3}=R(e_{1},e_{2})e_{2}=R(e_{1},e_{2})e_i=0,\quad R(e_{1},e_{2})e_{1}=\eta e_{2} \end{aligned}$$
(109)

by straightforward computations we easily obtain that

$$\begin{aligned} R\omega&(e_{3},e_{1},e_i,e_{2})=-\omega (R(e_{3},e_{1})e_i,e_{2})-\omega (e_i,R(e_{3},e_{1})e_{2})\\&=h(e_{3},e_i)\omega (e_{2},e_{2})-\eta \lambda _1\omega (e_i,e_{3})=-\eta \lambda _1\omega (e_i,e_{3}) \end{aligned}$$

and

$$\begin{aligned} R^2\omega&(e_{3},e_{1},e_{1},e_{2},e_i,e_{2})\\&=-R\omega (R(e_{3},e_{1})e_{1},e_{2},e_i,e_{2})-R\omega (e_{1},R(e_{3},e_{1})e_{2},e_i,e_{2})\\&\quad -R\omega (e_{1},e_{2},R(e_{3},e_{1})e_i,e_{2})-R\omega (e_{1},e_{2},e_i,R(e_{3},e_{1})e_{2})\\&=-R\omega (e_{1},\eta \lambda _1e_{3},e_i,e_{2})-R\omega (e_{1},e_{2},e_i,\eta \lambda _1e_{3})\\&=\eta \lambda _1(-\eta \lambda _1\omega (e_i,e_{3}))-0=-\eta ^2\lambda _{1}^2\omega (e_i,e_{3}). \end{aligned}$$

Let us assume that (104) holds for some \(p\ge 2\). Then using (108) we compute

$$\begin{aligned} R^{p+1}\omega&(e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&=-R^{p}\omega (e_{1},\eta \lambda _1e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad -R^{p}\omega (e_{1},e_{2},e_{1},\eta \lambda _1e_{3},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad \ldots \\&\quad -R^{p}\omega (e_{1},e_{2},e_{1},e_{2},\ldots ,e_{1},\eta \lambda _1e_{3},e_i,e_{2})\\&\quad -R^{p}\omega (e_{1},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2},-h(e_{3},e_i)e_{2},e_{2})\\&\quad -R^{p}\omega (e_{1},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,\eta \lambda _1e_{3})\\&=-\eta \lambda _1R^{p}\omega (\overbrace{e_{1},e_{3}}^{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad -\eta \lambda _1R^{p}\omega (e_{1},e_{2},\overbrace{e_{1},e_{3}}^{2},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad \ldots \\&\quad -\eta \lambda _1R^{p}\omega (e_{1},e_{2},\ldots ,e_{1},e_{2},\overbrace{e_{1},e_{3}}^{p},e_i,e_{2})\\&\quad -\eta \lambda _1R^{p}\omega (e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{3}). \end{aligned}$$

From (109) it follows that for \(j=2,\ldots ,p\) and \(i\in \{1,\ldots ,2n\}\setminus \{2\}\)

$$\begin{aligned} R^{p}\omega&(e_{1},e_{2},\ldots ,\overbrace{e_{1},e_{3}}^{j},\ldots ,e_{1},e_{2},e_i,e_{2})\nonumber \\&=-R^{p-1}\omega (e_{1},e_{2},\ldots ,\overbrace{\eta e_{2},e_{3}}^{j-1},\ldots ,e_{1},e_{2},e_i,e_{2}). \end{aligned}$$
(110)

We also have

$$\begin{aligned} R^{p}\omega (e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{3})=0 \end{aligned}$$

for \(i=3,\ldots ,2n\), thanks to (109) and since \(p\ge 2\). Now we obtain

$$\begin{aligned} R^{p+1}\omega&(e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&=\eta \lambda _1R^{p}\omega (e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad +\eta \lambda _1R^{p-1}\omega (\overbrace{\eta e_{2},e_{3}}^{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad \ldots \\&\quad +\eta \lambda _1R^{p-1}\omega (e_{1},e_{2},\ldots ,e_{1},e_{2},\overbrace{\eta e_{2},e_{3}}^{p-1},e_i,e_{2})\\&=\eta \lambda _1R^{p}\omega (e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&\quad +\eta ^2\lambda _1R^{p-1}\omega (e_{2},e_{3},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2})\\&=\eta \lambda _1R^{p}\omega (e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2},e_i,e_{2}), \end{aligned}$$

where the last two equalities follow from (109) and (105), respectively. By the induction principle (104) is true for all \(p\ge 1\). \(\square \)
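The curvature identities (106)–(109) used in this proof all come directly from the Gauss equation \(R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY\). A quick numeric sketch (hypothetical test code; the values of \(\eta ,\lambda _i,\varepsilon _i\) are arbitrary):

```python
# Hypothetical check of the curvature identities (106)-(109) from the
# Gauss equation R(X,Y)Z = h(Y,Z)SX - h(X,Z)SY, with 2n = 4 and the
# basis, S and h of Lemma 5.4 (all concrete values are test data).

DIM = 4
eta, lam1, lam2, eps1, eps2 = 1.5, 2.0, -3.0, 1.0, -1.0
Smat = [[0.0, 0.0, 0.0, 0.0],               # S e1 = e2, S e2 = 0,
        [1.0, 0.0, 0.0, 0.0],               # S e3 = lam1 e3, S e4 = lam2 e4
        [0.0, 0.0, lam1, 0.0],
        [0.0, 0.0, 0.0, lam2]]
hmat = [[0.0, eta, 0.0, 0.0],               # h(e1,e2) = eta,
        [eta, 0.0, 0.0, 0.0],               # h(e3,e3) = eps1, h(e4,e4) = eps2
        [0.0, 0.0, eps1, 0.0],
        [0.0, 0.0, 0.0, eps2]]

def e(i): return [1.0 if j == i else 0.0 for j in range(DIM)]
def S(X): return [sum(Smat[i][j] * X[j] for j in range(DIM)) for i in range(DIM)]
def h(X, Y): return sum(hmat[i][j] * X[i] * Y[j] for i in range(DIM) for j in range(DIM))
def R(X, Y, Z):                             # Gauss equation
    a, b, SX, SY = h(Y, Z), h(X, Z), S(X), S(Y)
    return [a * SX[i] - b * SY[i] for i in range(DIM)]

r_106a = R(e(2), e(1), e(3))   # (106): R(e3,e2)e_i = 0 (here i = 4)
r_106b = R(e(2), e(1), e(1))   # (106): R(e3,e2)e2 = 0
r_107  = R(e(2), e(1), e(0))   # (107): R(e3,e2)e1 = eta*lam1*e3
r_108a = R(e(2), e(0), e(2))   # (108): R(e3,e1)e3 = -h(e3,e3)*e2
r_108b = R(e(2), e(0), e(1))   # (108): R(e3,e1)e2 = eta*lam1*e3
r_108c = R(e(2), e(0), e(0))   # (108): R(e3,e1)e1 = 0
r_109  = R(e(0), e(1), e(0))   # (109): R(e1,e2)e1 = eta*e2
```

Each vector comes out exactly as the corresponding identity predicts.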

Lemma 5.5

Let \(\{e_1,e_2,\ldots ,e_{2n}\}\) be vectors with the properties from Lemma 5.4. Then for every \(p\ge 1\) we have

$$\begin{aligned} R^{p}\omega (e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2})&=(-1)^p\eta ^p\lambda _1^p\omega (e_{3},e_{2})\end{aligned}$$
(111)
$$\begin{aligned} R^{p}\omega (e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2})&=-\eta ^p\lambda _1^p\omega (e_{1},e_{3})\nonumber \\&\quad +\frac{1}{2}\Big ((-1)^p+1\Big )\eta ^p\lambda _1^{p-1}\omega (e_{2},e_{3}). \end{aligned}$$
(112)

Proof

Since the basis \(\{e_{1},\ldots ,e_{2n}\}\) satisfies the conditions of Lemma 5.4, in particular we have (109) and (110). We also have

$$\begin{aligned} R(e_{3},e_{2})e_{1}=\eta \lambda _1e_{3}, \quad R(e_{3},e_{2})e_{2}=0. \end{aligned}$$
(113)

By direct computations we check that

$$\begin{aligned} R\omega (e_{3},e_{2},e_{1},e_{2})&=-\eta \lambda _1\omega (e_{3},e_{2}),\\ R\omega (e_{3},e_{1},e_{1},e_{2})&=-\eta \lambda _1\omega (e_{1},e_{3}),\\ R^2\omega (e_{3},e_{1},e_{1},e_{2},e_{1},e_{2})&=-\eta ^2{\lambda _1}^2\omega (e_{1},e_{3})+\eta ^2\lambda _1\omega (e_{2},e_{3}). \end{aligned}$$

This means that (111) is true for \(p=1\) and (112) is true for \(p=1,2\).

To show that (111) is true for all \(p\ge 1\), we use (113) and compute

$$\begin{aligned} R^{p+1}\omega&(e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&=-R^p\omega (\eta \lambda _1e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&\quad -R^p\omega (e_{1},e_{2},\eta \lambda _1e_{3},e_{2},\ldots ,e_{1},e_{2})\\&\quad \ldots \\&\quad -R^p\omega (e_{1},e_{2},\ldots ,e_{1},e_{2},\eta \lambda _1e_{3},e_{2})\\&=-\eta \lambda _1R^p\omega (e_{3},e_{2},e_{1},e_{2},\ldots ,e_{1},e_{2}), \end{aligned}$$

since all terms but the first are equal to 0 due to (109). Now by the induction principle (111) is true for all \(p\ge 1\).

To prove (112), assume that it is true for some \(p\ge 2\). Then we compute

$$\begin{aligned} R^{p+1}\omega&(e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&=-R^p\omega (e_{1},\eta \lambda _1e_{3},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&\quad -R^p\omega (e_{1},e_{2},e_{1},\eta \lambda _1e_{3},\ldots ,e_{1},e_{2})\\&\quad \ldots \\&\quad -R^p\omega (e_{1},e_{2},\ldots ,e_{1},e_{2},e_{1},\eta \lambda _1e_{3})\\&=-\eta \lambda _1R^p\omega (e_{1},e_{3},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&\quad -\eta \lambda _1R^p\omega (e_{1},e_{2},e_{1},e_{3},\ldots ,e_{1},e_{2})\\&\quad \ldots \\&\quad -\eta \lambda _1R^p\omega (e_{1},e_{2},\ldots ,e_{1},e_{2},e_{1},e_{3}). \end{aligned}$$

Now using (110) (for \(i=1\)) we obtain

$$\begin{aligned} R^{p+1}\omega&(e_{3},e_{1},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&=-\eta \lambda _1R^p\omega (e_{1},e_{3},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&\quad +\eta ^2\lambda _1R^{p-1}\omega (e_{2},e_{3},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&\quad +\eta ^2\lambda _1R^{p-1}\omega (e_{1},e_{2},e_{2},e_{3},\ldots ,e_{1},e_{2})\\&\quad \ldots \\&\quad +\eta ^2\lambda _1R^{p-1}\omega (e_{1},e_{2},\ldots ,e_{2},e_{3},e_{1},e_{2})\\&\quad -\eta \lambda _1R^p\omega (e_{1},e_{2},\ldots ,e_{1},e_{2},e_{1},e_{3})\\&=-\eta \lambda _1R^p\omega (e_{1},e_{3},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&\quad +\eta ^2\lambda _1R^{p-1}\omega (e_{2},e_{3},e_{1},e_{2},\ldots ,e_{1},e_{2})\\&=\eta \lambda _1 \Big (-\eta ^p\lambda _1^p\omega (e_{1},e_{3}) +\frac{1}{2}((-1)^p+1)\eta ^p\lambda _1^{p-1}\omega (e_{2},e_{3})\Big )\\&\quad -\eta ^2\lambda _1(-1)^{p-1}\eta ^{p-1}\lambda _1^{p-1}\omega (e_{3},e_{2})\\&=-\eta ^{p+1}\lambda _1^{p+1}\omega (e_{1},e_{3})+\frac{1}{2}((-1)^p+1+2\cdot (-1)^{p-1})\eta ^{p+1}\lambda _1^{p}\omega (e_{2},e_{3})\\&=-\eta ^{p+1}\lambda _1^{p+1}\omega (e_{1},e_{3})+\frac{1}{2}((-1)^{p+1}+1)\eta ^{p+1}\lambda _1^{p}\omega (e_{2},e_{3}) \end{aligned}$$

where the last equalities are a consequence of (112) and of (111) (for \(p-1\)). By the induction principle (112) is true for all \(p\ge 1\). \(\square \)
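The closed forms (111) and (112) can be spot-checked numerically for small p. The sketch below is hypothetical test code (all concrete values are arbitrary), assuming the Gauss equation \(R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY\) and treating each application of R as a derivation on the remaining arguments of \(R^{p-1}\omega \):

```python
# Hypothetical spot-check of (111) and (112) for p = 1, 2 (arbitrary test
# data), assuming the Gauss equation R(X,Y)Z = h(Y,Z)SX - h(X,Z)SY and the
# derivation action of R on the remaining arguments of R^{p-1} omega.

DIM = 4                                     # 2n = 4
eta, lam1, lam2, eps1, eps2 = 1.5, 2.0, -3.0, 1.0, -1.0
Smat = [[0.0, 0.0, 0.0, 0.0],               # S e1 = e2, S e2 = 0,
        [1.0, 0.0, 0.0, 0.0],               # S e3 = lam1 e3, S e4 = lam2 e4
        [0.0, 0.0, lam1, 0.0],
        [0.0, 0.0, 0.0, lam2]]
hmat = [[0.0, eta, 0.0, 0.0],
        [eta, 0.0, 0.0, 0.0],
        [0.0, 0.0, eps1, 0.0],
        [0.0, 0.0, 0.0, eps2]]
W = [[0.0] * DIM for _ in range(DIM)]       # omega(e_i, e_j) = W[i][j]
W[0][1], W[0][2], W[0][3], W[1][2], W[1][3], W[2][3] = 1.3, 0.7, -0.4, 0.9, 0.2, -1.1
for i in range(DIM):
    for j in range(i + 1, DIM):
        W[j][i] = -W[i][j]

def e(i): return [1.0 if j == i else 0.0 for j in range(DIM)]
def S(X): return [sum(Smat[i][j] * X[j] for j in range(DIM)) for i in range(DIM)]
def h(X, Y): return sum(hmat[i][j] * X[i] * Y[j] for i in range(DIM) for j in range(DIM))
def om(X, Y): return sum(W[i][j] * X[i] * Y[j] for i in range(DIM) for j in range(DIM))
def R(X, Y, Z):                             # Gauss equation
    a, b, SX, SY = h(Y, Z), h(X, Z), S(X), S(Y)
    return [a * SX[i] - b * SY[i] for i in range(DIM)]

def R_pow_omega(p, args):                   # args holds 2p+2 vectors
    if p == 0:
        return om(args[0], args[1])
    X, Y, rest = args[0], args[1], list(args[2:])
    return -sum(R_pow_omega(p - 1, rest[:i] + [R(X, Y, rest[i])] + rest[i + 1:])
                for i in range(len(rest)))

checks = []
for p in (1, 2):
    lhs111 = R_pow_omega(p, [e(2), e(1)] + [e(0), e(1)] * p)
    rhs111 = (-1) ** p * eta ** p * lam1 ** p * W[2][1]
    lhs112 = R_pow_omega(p, [e(2), e(0)] + [e(0), e(1)] * p)
    rhs112 = (-eta ** p * lam1 ** p * W[0][2]
              + 0.5 * ((-1) ** p + 1) * eta ** p * lam1 ** (p - 1) * W[1][2])
    checks.append((lhs111, rhs111, lhs112, rhs112))
```

For this data all left- and right-hand sides agree, and the values are nonzero, so the check is not vacuous.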

Theorem 5.6

Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) (\(\dim M\ge 4\)) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). If \(R^p\omega =0\) for some \(p\ge 1\) then for every point \(x\in M\) either \(S=0\) at x or there exists a basis \(\{e_1,\ldots ,e_{2n}\}\) of \(T_xM\) in which S has the form

$$\begin{aligned} S=\left[ \begin{matrix} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ 1 &{} 0 &{} 0 &{} \ldots &{} 0 \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 \end{matrix}\right] . \end{aligned}$$
(114)

Proof

From Theorem 4.17 we know that the Jordan block decomposition of S does not contain complex blocks. From Theorem 3.15 we also know that S contains at most one real Jordan block of dimension 2 and that the remaining blocks are all of dimension 1.

Now (rearranging the vectors \(\{e_1,\ldots ,e_{2n}\}\) if needed) S and h can be represented in one of the following two forms:

$$\begin{aligned} S=\left[ \begin{matrix} \lambda _1 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad \lambda _2 &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad \lambda _3 &{} \quad \ldots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad \lambda _{2n} \end{matrix}\right] , \quad h=\left[ \begin{matrix} \varepsilon _1 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad \varepsilon _2 &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad \varepsilon _3 &{} \quad \ldots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad \varepsilon _{2n} \end{matrix}\right] \end{aligned}$$
(115)

or

$$\begin{aligned} S=\left[ \begin{matrix} \alpha &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 1 &{} \quad \alpha &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad \lambda _1 &{} \quad \ldots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0&{} \quad \ldots &{} \quad \lambda _{2n-2} \\ \end{matrix}\right] , \quad h=\left[ \begin{matrix} 0 &{} \quad \eta &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ \eta &{} \quad 0 &{} \quad 0 &{} \quad \ldots &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad \varepsilon _1 &{} \quad \ldots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0&{} \quad \ldots &{} \quad \varepsilon _{2n-2} \\ \end{matrix}\right] \end{aligned}$$
(116)

where \(\varepsilon _{i}\in \{-1,1\}\) for \(i=1,\ldots ,2n\) (resp. \(i=1,\ldots ,2n-2\)), \(\eta \in \{-1,1\}\) and

$$\begin{aligned} |\lambda _1|\ge \cdots \ge |\lambda _{2n}|. \end{aligned}$$
(117)

We need to show that \(\lambda _1=\cdots =\lambda _{2n}=0\) (respectively \(\alpha =0\) and \(\lambda _1=\cdots =\lambda _{2n-2}=0\)).

First assume that (115) holds. Since \(\omega \) is non-degenerate there exists \(i_0\) such that \(\omega (e_1,e_{i_0})\ne 0\). If \(i_0>2\) then using Lemma 5.1 (\(s=2n\), \(k=1\), \(j=2\), \(i=i_0\)) we get

$$\begin{aligned} R^{2p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_{i_0})=(-1)^p\varepsilon _1^p\varepsilon _2^p \lambda _1^p\lambda _2^p\omega (e_1,e_{i_0}). \end{aligned}$$

If \(i_0=2\) then from Lemma 5.1 (\(s=2n\), \(k=1\), \(j=3\), \(i=i_0\)) we get

$$\begin{aligned} R^{2p}\omega (e_1,e_3,e_1,e_3,\ldots ,e_1,e_3,e_1,e_{i_0})=(-1)^p\varepsilon _1^p\varepsilon _3^p \lambda _1^p\lambda _3^p\omega (e_1,e_{i_0}). \end{aligned}$$

Since \(R^p\omega =0\), also \(R^{2p}\omega =0\), and the above implies that

$$\begin{aligned} (-1)^p\varepsilon _1^p\varepsilon _2^p \lambda _1^p\lambda _2^p\omega (e_1,e_{i_0})=(-1)^p\varepsilon _1^p\varepsilon _3^p \lambda _1^p\lambda _3^p\omega (e_1,e_{i_0})=0. \end{aligned}$$

Taking into account (117) we deduce that \(\lambda _2=\lambda _3=0\) if \(i_0>2\) and \(\lambda _3=\lambda _4=0\) if \(i_0=2\). Now, if \(i_0\ne 4\), using Lemma 5.2 (\(X=e_1\), \(Y=e_{i_0}\), \(Z_1=e_3\), \(Z_2=e_4\)) we get

$$\begin{aligned} R^{2p}\omega (e_1,\underbrace{e_3,e_4,e_3,e_4,\ldots ,e_3,e_4}_{4p},e_{i_0})=(-1)^p\lambda _1^{2p}\varepsilon _3^p \varepsilon _4^p\omega (e_1,e_{i_0}). \end{aligned}$$

If \(i_0=4\) then we have that also \(\lambda _2=0\) and in this case from Lemma 5.2 (\(X=e_1\), \(Y=e_{i_0}\), \(Z_1=e_2\), \(Z_2=e_3\)) we get

$$\begin{aligned} R^{2p}\omega (e_1,\underbrace{e_2,e_3,e_2,e_3,\ldots ,e_2,e_3}_{4p},e_{i_0})=(-1)^p\lambda _1^{2p}\varepsilon _2^p \varepsilon _3^p\omega (e_1,e_{i_0}). \end{aligned}$$

The above implies that \(\lambda _1=0\) and thanks to (117) we get that \(S=0\).

Now assume that S and h have the form (116). First we shall show that \(\alpha =0\). For this purpose note that from Corollary 3.2 (\(k=2\)) we have

$$\begin{aligned} R^p\omega (e_1,e_2,\ldots ,e_1,e_2,e_i,e_2)=\eta ^p\alpha ^p\omega (e_i,e_2) \end{aligned}$$

for \(i\in \{2,\ldots ,2n\}\). From Lemma 5.3 (\(\varepsilon =\varepsilon _1\)) we have

$$\begin{aligned} R^{p}\omega (e_1,e_2,e_1,e_2,\ldots ,e_1,e_2,e_1,e_3,e_1,e_3)=(-1)^p\cdot (2\eta \alpha )^{p-1}\varepsilon _1\omega (e_1,e_2). \end{aligned}$$

Since \(R^p\omega =0\) and \(\eta ,\varepsilon _1\ne 0\) we obtain

$$\begin{aligned} \alpha ^p\omega (e_2,e_2)=\cdots =\alpha ^p\omega (e_{2n},e_2)=\alpha ^{p-1}\omega (e_1,e_2)=0 \end{aligned}$$

and in consequence \(\alpha =0\) (since \(\omega \) is non-degenerate).

Now we are able to show that \(\lambda _1=\cdots =\lambda _{2n-2}=0\). Indeed, from (117) it follows that it is enough to show that \(\lambda _1=0\). Since \(\alpha =0\), the basis \(\{e_{1},\ldots ,e_{2n}\}\) satisfies the conditions of Lemmas 5.4 and 5.5. Thus, using formulas (104), (111) and (112) and taking into account that \(R^p\omega =0\), we obtain

$$\begin{aligned} -\eta ^p{\lambda _1}^p\omega (e_i,e_{3})&=0 \qquad \text {for }i=3,\ldots ,2n, \end{aligned}$$
(118)
$$\begin{aligned} (-1)^p\eta ^p\lambda _1^p\omega (e_{3},e_{2})&=0, \end{aligned}$$
(119)
$$\begin{aligned} -\eta ^p\lambda _1^p\omega (e_{1},e_{3})+\frac{1}{2}\Big ((-1)^p+1\Big )&\eta ^p\lambda _1^{p-1}\omega (e_{2},e_{3})=0. \end{aligned}$$
(120)

Since \(\omega \) is non-degenerate there must exist \(i_0\in \{1,\ldots ,2n\}\setminus \{3\}\) such that \(\omega (e_{i_0},e_{3})\ne 0\). If \(i_0>3\) then from (118) we immediately get that \(\lambda _1=0\). If \(i_0=2\) then (119) implies that \(\lambda _1=0\). Finally, if \(i_0=1\) and \(\omega (e_{2},e_{3})=0\), we obtain \(\lambda _1=0\) from (120). This completes the proof of the theorem. \(\square \)

As a consequence of Theorem 5.6 we obtain

Theorem 5.7

Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) (\(\dim M\ge 4\)) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). If \(R^k\omega =0\) for some \(k\ge 1\) then the shape operator S has rank \(\le 1\).

Proof

From Theorem 5.6 it follows that for every \(x\in M\) either \(S_x=0\) or \(S_x\) is of the form (114); thus \({\text {rank}}S_x\le 1\) for every \(x\in M\). \(\square \)

Let us also recall the following lemma.

Lemma 5.8

([19]). Let T be a tensor of type (0, p) and let \(\nabla \) be an affine torsion-free connection. Then for every \(k\ge 1\) and for any \(2k+p\) vector fields \(X_{\pm 1}^1,\ldots ,X_{\pm 1}^k\), \(Y_1,\ldots ,Y_p\) the following identity holds:

$$\begin{aligned} (R^k&\cdot T)(X_1^1,X_{-1}^1,\ldots ,X_1^k,X_{-1}^k,Y_1,\ldots ,Y_p)\\ \nonumber&=\sum _{a\in \mathcal {J}}{\text {sgn}}a(\nabla ^{2k}T)(X_{a(1)}^1,X_{-a(1)}^1,\ldots ,X_{a(k)}^k,X_{-a(k)}^k,Y_1,\ldots ,Y_p), \end{aligned}$$
(121)

where \(\mathcal {J}=\{a:I_k\rightarrow \{-1,1\}\}\) and \({\text {sgn}}a:=a(1)\cdot \ldots \cdot a(k)\).

From Theorem 5.7 and Lemma 5.8 we obtain the following (indeed, if \(\nabla ^p\omega =0\) then \(\nabla ^{2k}\omega =0\) for any k with \(2k\ge p\), so (121) yields \(R^k\omega =0\)):

Theorem 5.9

Let \(f:M\rightarrow \mathbb {R}^{2n+1}\) (\(\dim M\ge 4\)) be a non-degenerate affine hypersurface with a locally equiaffine transversal vector field \(\xi \) and an almost symplectic form \(\omega \). If \(\nabla ^p\omega =0\) for some \(p\ge 1\) then the shape operator S has rank \(\le 1\).

We conclude this section with the following example.

Example 5.10

Let \(n\ge 2\) and let \(\gamma ,\alpha _i:\mathbb {R}\rightarrow \mathbb {R}^{2n+1}\) be curves given as follows:

$$\begin{aligned} \gamma (t)&:=(\cos t,\sin t,0,\ldots ,0)^T; \\ \alpha _i(t)&:=(0,\ldots ,0,\overbrace{\cos t}^{(i+3)},\overbrace{\sin t}^{(i+4)},0,\ldots ,0)^T \end{aligned}$$

for \(i=0,\ldots ,2n-3\). Let \(\varepsilon _i\in \{-1,1\}\) for \(i=1,\ldots ,2n-3\).

Now let us consider an immersion \(f:\mathbb {R}\setminus \{0\}\times (0,\infty )\times (0,\frac{\pi }{2})^{2n-2}\rightarrow \mathbb {R}^{2n+1}\) given by the formula:

$$\begin{aligned} f(x,y,z_0,\ldots ,z_{2n-3}):=y\gamma '(x)+x\alpha _0(z_0)+\sum _{i=1}^{2n-3}\varepsilon _i\alpha _i(z_i) \end{aligned}$$

together with the transversal vector field

$$\begin{aligned} \xi :\mathbb {R}\setminus \{0\}\times (0,\infty )\times (0,\frac{\pi }{2})^{2n-2}\ni (x,y,z_0,\ldots ,z_{2n-3})\mapsto -\gamma (x)\in \mathbb {R}^{2n+1}. \end{aligned}$$

By straightforward computations we get

$$\begin{aligned} h=\left[ \begin{matrix} 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0 \\ 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad xy &{} \quad 0 &{} \quad \cdots &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \varepsilon _1 y\frac{\sin (z_0)}{\cos (z_1)} &{} \quad \cdots &{} \quad 0 \\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad \varepsilon _{2n-3}y\frac{\sin (z_0)}{\cos (z_{2n-3})}\prod _{i=1}^{2n-4}{\tan {z_i}} \\ \end{matrix}\right] \end{aligned}$$

and

$$\begin{aligned} S=\left[ \begin{matrix} 0 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0 \\ 1 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0\\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0 \end{matrix}\right] ,\quad \tau =0. \end{aligned}$$

Thus f is a non-degenerate equiaffine hypersurface with the second fundamental form of signature \((1,-1,\varepsilon _0,\varepsilon _{1},\ldots ,\varepsilon _{2n-3})\) where \(\varepsilon _0=1\) for \(x>0\) and \(\varepsilon _0=-1\) for \(x<0\). Note also that \(R\ne 0\) since \(S\ne 0\).

Now let us define

$$\begin{aligned} \Omega =[\omega _{i,j}]_{i,j=1\ldots 2n} \end{aligned}$$

where the \(\omega _{i,j}\) are constants such that \(\omega _{i,j}=-\omega _{j,i}\) and \(\det \Omega \ne 0\). That is, \(\Omega \) is a symplectic form. We easily check that

$$\begin{aligned} R\Omega \Big (\frac{\partial }{\partial {x}},\frac{\partial }{\partial {z_0}},\frac{\partial }{\partial {x}},\frac{\partial }{\partial {z_0}}\Big )=-xy\omega _{1,2} \end{aligned}$$

and

$$\begin{aligned} R^2\Omega \Big (\frac{\partial }{\partial {x}},\frac{\partial }{\partial {z_0}},\frac{\partial }{\partial {x}},\frac{\partial }{\partial {z_0}},\frac{\partial }{\partial {x}},\frac{\partial }{\partial {z_0}}\Big )=xy\omega _{2,3} \end{aligned}$$

thus \(R\Omega \ne 0\) and \(R^2\Omega \ne 0\) provided that \(\omega _{1,2}\ne 0\) and \(\omega _{2,3}\ne 0\). On the other hand, one may show that \(R^3\Omega = 0\).
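For \(n=2\) these claims can be verified numerically at a sample point. The following sketch is hypothetical test code (the point, \(\varepsilon _1\) and the entries of \(\Omega \) are arbitrary choices), assuming the Gauss equation \(R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY\) and the derivation action of R on the arguments of \(R^{k-1}\Omega \):

```python
# Hypothetical numeric check of the example for n = 2: coordinates
# (x, y, z0, z1), h and S as computed above at a sample point, and a
# random constant symplectic form Omega (all concrete values are test data).
import math
import random

DIM = 4
x, y, z0, z1, eps1 = 2.0, 3.0, 0.6, 0.6, 1.0          # a sample point
Smat = [[0.0, 0.0, 0.0, 0.0],                         # S from the example:
        [1.0, 0.0, 0.0, 0.0],                         # rank 1, S o S = 0
        [0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0]]
hmat = [[0.0, 1.0, 0.0, 0.0],                         # h at the sample point
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, x * y, 0.0],
        [0.0, 0.0, 0.0, eps1 * y * math.sin(z0) / math.cos(z1)]]
Wm = [[0.0] * DIM for _ in range(DIM)]                # Omega entries
Wm[0][1], Wm[0][2], Wm[0][3], Wm[1][2], Wm[1][3], Wm[2][3] = 1.3, 0.7, -0.4, 0.9, 0.2, -1.1
for i in range(DIM):
    for j in range(i + 1, DIM):
        Wm[j][i] = -Wm[i][j]

def e(i): return [1.0 if j == i else 0.0 for j in range(DIM)]
def S(X): return [sum(Smat[i][j] * X[j] for j in range(DIM)) for i in range(DIM)]
def h(X, Y): return sum(hmat[i][j] * X[i] * Y[j] for i in range(DIM) for j in range(DIM))
def om(X, Y): return sum(Wm[i][j] * X[i] * Y[j] for i in range(DIM) for j in range(DIM))
def R(X, Y, Z):                                       # Gauss equation
    a, b, SX, SY = h(Y, Z), h(X, Z), S(X), S(Y)
    return [a * SX[i] - b * SY[i] for i in range(DIM)]

def R_pow(k, args):                                   # R^k Omega, 2k+2 args
    if k == 0:
        return om(args[0], args[1])
    X, Y, rest = args[0], args[1], list(args[2:])
    return -sum(R_pow(k - 1, rest[:i] + [R(X, Y, rest[i])] + rest[i + 1:])
                for i in range(len(rest)))

r1 = R_pow(1, [e(0), e(2), e(0), e(2)])   # expected: -x*y*Wm[0][1]
r2 = R_pow(2, [e(0), e(2)] * 3)           # expected:  x*y*Wm[1][2]

random.seed(0)                            # spot-check R^3 Omega = 0 on
r3 = max(abs(R_pow(3, [[random.uniform(-1, 1) for _ in range(DIM)]
                       for _ in range(8)]))           # random 8-tuples
         for _ in range(10))
```

By multilinearity, checking \(R^3\Omega \) on random tuples of vectors supports (but of course does not prove) that it vanishes identically at the chosen point.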