1 Problem and results

The analysis of the integrability of nonlinear systems plays an important role in the applied sciences. On the one hand, it allows one to understand fundamental properties of mathematical models of various phenomena, systems, and devices; on the other hand, it inspires the development of branches of applied mathematics that make this analysis effective. The diversity of systems and their specific behaviours means that various methods of analysis are applied. Among them, the most valuable are those applicable to wide classes of systems and allowing for the formulation of simple integrability criteria.

One recent source of such integrability criteria uses the differential Galois theory of the variational equations obtained by linearizing the considered nonlinear system along a certain particular solution. This approach has proved very effective for Hamiltonian systems in a flat phase space with the following Hamilton function

$$\begin{aligned} H=\tfrac{1}{2}\varvec{p}^2+V(\varvec{q}), \end{aligned}$$
(1)

where \(\varvec{q}=(q_1,\ldots ,q_n)\), \(\varvec{p}=(p_1,\ldots ,p_n)\), and the potential \(V(\varvec{q})\) is a homogeneous function of the coordinates of integer degree k. For such systems, the necessary integrability condition is expressible by means of the eigenvalues of the Hessian matrix of the potential \(V''(\varvec{d})\) calculated at specific points satisfying the condition \(V'(\varvec{d})=\varvec{d}\). Namely, all eigenvalues of \(V''(\varvec{d})\) at every \(\varvec{d}\) must belong to certain sets of rational numbers whose form depends on k, as was shown in [21, 22]. The application of this condition to a specific potential is very simple: it reduces to the calculation of the eigenvalues of \(V''(\varvec{d})\). Moreover, for polynomial potentials it has turned out that certain universal relations between the eigenvalues of \(V''(\varvec{d})\) calculated at various \(\varvec{d}\) exist, and they give additional conditions for the integrability [17, 27, 28]. All of these conditions made it possible to start the project of classification of all integrable potentials [18].
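To illustrate how these criteria are applied in practice, the following minimal sympy sketch, not part of the original text, computes the points \(\varvec{d}\) and the eigenvalues of \(V''(\varvec{d})\); the cubic potential \(V=q_1^2q_2\) is chosen here purely as an illustration and is not one of the potentials studied in this paper.

```python
# Illustrative sympy sketch: for the sample homogeneous potential V = q1**2*q2
# (degree k = 3), find the points d with V'(d) = d and the eigenvalues of the
# Hessian V''(d) that enter the criteria of [21, 22].
import sympy as sp

q1, q2 = sp.symbols('q1 q2')
V = q1**2*q2  # chosen only for illustration

grad = [sp.diff(V, q1), sp.diff(V, q2)]
hess = sp.hessian(V, (q1, q2))

# points d satisfying V'(d) = d
points = sp.solve([grad[0] - q1, grad[1] - q2], [q1, q2], dict=True)
for d in points:
    if d[q1] == 0 and d[q2] == 0:
        continue  # skip the trivial solution d = 0
    print(d, hess.subs(d).eigenvals())
```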

A natural extension of the above-mentioned systems is given by Hamiltonians with velocity-dependent potentials, also called systems with gyroscopic or magnetic terms. In the case of two degrees of freedom, they are given by the following Hamiltonian function

$$\begin{aligned} H_0 = \frac{1}{2}(p_1^2+p_2^2) + \omega (p_1q_2-p_2q_1) + V(q_1,q_2), \end{aligned}$$
(2)

where the potential \( V(q_1,q_2) \) is, in general, an algebraic function. These types of systems are common in physics, astronomy, and geometry. Let us mention several examples.

The Hamiltonian function of the famous restricted circular three-body problem has exactly the form (2) with potential \(V(q_1,q_2)=V_{\mathrm {CR}}(q_1,q_2)\) given by the following formula

$$\begin{aligned} V_{\mathrm {CR}}= & {} -\frac{1-\mu }{\sqrt{(q_1-\mu )^2 + q_2^2}}-\frac{\mu }{\sqrt{(q_1-\mu +1)^2 + q_2^2}},\nonumber \\ \end{aligned}$$
(3)

see, e.g. [3, 30, 31]. It describes the dynamics of a massless particle in the gravity field of two primaries, with masses \(1-\mu \) and \(\mu \), moving in circular orbits around their mass centre.

Motion of a satellite in the equatorial plane of a flattened planet is described by the Hamiltonian (2), where \(V(q_1, q_2)\) is usually given by a harmonic expansion of the gravity field of the planet, see, e.g. [11, 31].

The term linear in the momenta in (2) can describe the influence of a constant magnetic field directed perpendicularly to the plane of motion of a charged particle. Indeed, the Hamiltonian

$$\begin{aligned} H_{\mathrm {EM}}=\tfrac{1}{2}(p_1-A_1)^2 +\tfrac{1}{2}(p_2-A_2)^2 + U(q_1,q_2), \end{aligned}$$
(4)

describes the motion of a charge in an electromagnetic field given by the vector and scalar potentials \(\varvec{A}(q_1,q_2)=(A_1(q_1,q_2),A_2(q_1,q_2))\) and \(U(q_1,q_2) \), respectively. If \(\varvec{B}=(0,0,B)\), then the vector potential is \(\varvec{A}=(-\omega q_2,\omega q_1)\), where \(\omega =B/2\), and Hamiltonian (4) takes the form (2) with \(V(q_1,q_2)=\tfrac{\omega ^2}{2}(q_1^2+q_2^2)\) \(+ U(q_1,q_2)\). Hamiltonians of this form appear in electromagnetic traps for charged particles, where a combination of an axially symmetric electrostatic field and a constant magnetic field is used, see, e.g. [32].
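This identification can be verified by a direct expansion; a minimal sympy sketch, not part of the original text:

```python
# Minimal sympy sketch: expanding the minimal-coupling Hamiltonian (4) with
# A = (-omega*q2, omega*q1) reproduces the gyroscopic form (2).
import sympy as sp

q1, q2, p1, p2, omega = sp.symbols('q1 q2 p1 p2 omega')
U = sp.Function('U')(q1, q2)

A1, A2 = -omega*q2, omega*q1
H_EM = sp.Rational(1, 2)*((p1 - A1)**2 + (p2 - A2)**2) + U

H_2 = (sp.Rational(1, 2)*(p1**2 + p2**2) + omega*(p1*q2 - p2*q1)
       + sp.Rational(1, 2)*omega**2*(q1**2 + q2**2) + U)

print(sp.simplify(sp.expand(H_EM - H_2)))  # expected 0
```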

To apply the differential Galois theory, which concerns linear differential equations, we have to know a particular solution of the investigated nonlinear system along which to perform the linearization. The efficiency of this approach for Hamiltonian systems with homogeneous potentials given in (1) is related to the presence of straight-line particular solutions directed along \(\varvec{d}\) satisfying \(V'(\varvec{d})=\varvec{d}\).

In general, there is no method for finding a non-equilibrium solution of a nonlinear system of differential equations. In particular, this is true for systems given by Hamiltonian (2). Only in exceptional cases, assuming a very special form of the potential \( V(q_1,q_2) \), can one find such a solution.

To overcome this difficulty, we use the following idea. First, we assume that the potential \( V(q_1,q_2) \) is a rational homogeneous function of degree k, and then we perform the classical Levi-Civita regularization, see the next section. As a result, we obtain a system given by the Hamiltonian

$$\begin{aligned} \begin{aligned} K_0=&\frac{1}{2}\left( v_1^2+v_2^2\right) + 2\omega (u_1^2+u_2^2)\left( u_2v_1-u_1v_2\right) \\&+4(u_1^2+u_2^2) W\left( u_1, u_2\right) -4h(u_1^2+u_2^2), \end{aligned} \end{aligned}$$
(5)

where \((u_1,u_2,v_1,v_2)\) are canonical coordinates, h is a complex parameter, and

$$\begin{aligned} W\left( u_1, u_2\right) = V\left( u_1^2-u_2^2,2u_1u_2\right) . \end{aligned}$$

This system has a family of particular solutions, and using them we can prove the following theorem.

Theorem 1

Assume that the system given by Hamiltonian (5) satisfies the following conditions:

  1. \( V(q_1,q_2) \) is a homogeneous rational function of degree \( k\in \mathbb {Z}\) and \( |k |>2 \),

  2. \( V(1,\mathrm {i})\ne 0 \) or \( V(-1,\mathrm {i})\ne 0 \),

then, for an arbitrary h, it does not admit an additional first integral which is a rational function of the variables \((u_1,u_2, v_1, v_2) \) and is functionally independent of \( K_0\).

Remarkably, although the application of differential Galois methods requires a particular solution, it seems that we have somehow avoided this requirement. Moreover, it is quite striking that the addition of the gyroscopic term to the Hamiltonian ‘almost always’ causes its non-integrability. Finally, the necessary conditions for the integrability are surprisingly simple. However, in spite of these advantages, we cannot conclude that the original system given by Hamiltonian (2) is not integrable. We explain this point later and show an example with a potential satisfying the assumptions of Theorem 1 for which Hamilton’s equations given by (2) are even super-integrable, but after regularization the obtained Hamiltonian (5) does not possess an additional first integral.

However, the non-integrability of the regularized Hamiltonian (5) implies the non-integrability of another system, given by the following Hamiltonian function

$$\begin{aligned} H_\mu= & {} \frac{1}{2}(p_1^2+p_2^2) + \omega (p_1q_2-p_2q_1)\nonumber \\&-\frac{\mu }{r}+ V(q_1,q_2), \end{aligned}$$
(6)

where the potential \( V(q_1,q_2) \) is a rational homogeneous function of degree k, and \( r^2=q_1^2+q_2^2 \). Namely, we prove the following result.

Theorem 2

If the system given by Hamiltonian (6) satisfies the following conditions:

  1. \( \mu \omega \ne 0 \),

  2. \( V(q_1,q_2) \) is a homogeneous rational function of degree \( k\in \mathbb {Z}\) and \( |k |>2 \),

  3. \( V(1,\mathrm {i})\ne 0 \) or \( V(-1,\mathrm {i})\ne 0 \),

then it does not admit an additional first integral which is a rational function of the variables \( (q_1,q_2, p_1, p_2,r,\mu ) \) and is functionally independent of \( H_\mu \).

The possibility of translating the non-integrability of the regularized Hamiltonian system (5) into the non-integrability of the system given by (6) follows from the fact that Hamiltonians (5) and (6) are related to each other by the so-called coupling constant metamorphosis transformation, which preserves first integrals.

As we will see, the proof of this theorem is based on Theorem 1. The case when \( V(q_1,q_2) \) is a homogeneous function of degree \(k=2\) is also important. Let us recall that for the classical Hill problem [13], and its various generalizations [2, 4, 25], the Hamiltonian has the form (6) with a homogeneous polynomial potential \( V(q_1,q_2) \) of degree \(k=2\). This case is excluded in our theorem. To study its integrability, we developed another method, see [8].

As mentioned above, we apply methods of differential Galois theory. In the context of Hamiltonian systems, the fundamental result of this approach is formulated in the Morales–Ramis theorem.

Theorem 3

Assume that a complex Hamiltonian system with n degrees of freedom is integrable with complex meromorphic first integrals in the Liouville sense in a neighbourhood of a phase curve \( \Gamma \). Then, the identity component of the differential Galois group of the variational equations along \( \Gamma \) is Abelian.

This theorem gives the necessary integrability conditions, and it can be applied to a wide class of systems provided a non-equilibrium particular solution is known. Usually, the Hamiltonian depends on certain parameters characterizing the system and the forces acting on it. Typically, using this theorem one can prove the non-integrability of the considered system for almost all values of the parameters. The further integrability analysis can then be restricted to only those exceptional values for which the necessary integrability conditions are fulfilled.

For details about the notions used in this theorem and its derivation, see the book of Morales Ruiz [21] or Audin [1]. A practical introduction to the subject and numerous applications of the above theorem can be found in [23], and a short description with one detailed example in [16]. Theorem 3 is one of the strongest tools for proving non-integrability. It is enough to mention that the century-old question about the integrability of the three-body problem was answered thanks to the application of this theorem.

According to the requirement of the above theorem, we assume that our system is complex. That is, \( (q_1,q_2,p_1,p_2)\in \mathbb {C}^4\) and \( V(q_1,q_2) \) is algebraic over \( \mathbb {C}(q_1,q_2) \).

It is important to notice that the Hamiltonian (6) is not a meromorphic function because of the term \( \mu /\sqrt{q_1^2 +q_2^2} \). However, as was explained in [6, 19], we can still apply the Morales–Ramis theory to systems with algebraic potentials.

The plan of the paper is the following. In Sect. 2, we describe the application of the Levi-Civita regularization to the Hamiltonian \(H_\mu \) (6) and the relation between the first integrals of the original and of the corresponding regularized Hamiltonian systems. In Sect. 3, the variational equations of the regularized Hamiltonian system generated by \(K_0\) (5) are derived and written as two second-order reduced differential equations: one homogeneous, which is the Gauss hypergeometric equation, and the second non-homogeneous with the same homogeneous part. Conditions under which the identity components of the differential Galois groups of these homogeneous and non-homogeneous differential equations are isomorphic to the additive subgroup of \(\mathrm {SL}(2,\mathbb {C})\) are formulated in Sect. 4. These conditions reduce to checking whether linear combinations of certain primitives are algebraic. The application of this condition to the considered variational equations is given in Sect. 5. For the convenience of the reader, in the Appendix, we recall basic facts about the local and global monodromy of the Gauss hypergeometric equation necessary to follow this section. In Sect. 6, the proofs of Theorems 1 and 2 are presented. In Sect. 7, we give an example of a super-integrable Hamiltonian system \(H_0\) which the Levi-Civita regularization transforms into a non-integrable Hamiltonian system satisfying the conditions of Theorem 1.

2 Levi-Civita regularization

We will use vector notation to shorten some expressions. Thus, we denote \(\varvec{q}=[q_1,q_2]^T\) and \(\varvec{p}= [p_1, p_2]^T\).

First, we perform the transformation \( (\varvec{q}, \varvec{p})\mapsto (\varvec{u},\varvec{v}) \) given by the following formulae

$$\begin{aligned} \begin{aligned} q_1=&\left( u_1^2-u_2^2\right) ,&p_1=&\frac{u_1v_1-u_2v_2}{2(u_1^2+u_2^2)}, \\ q_2=&2u_1u_2,&p_2=&\frac{u_2v_1+u_1v_2}{2(u_1^2+u_2^2)}. \end{aligned} \end{aligned}$$
(7)

This coordinate change is symplectic, and thus it transforms Hamiltonian (6) to the following one

$$\begin{aligned} \begin{aligned} {\widetilde{K}}_\mu =&\frac{v_1^2+v_2^2}{8(u_1^2+u_2^2)}+ \frac{1}{2}\omega (u_2v_1-u_1v_2)\\&-\frac{\mu }{u_1^2+u_2^2}+ W(u_1,u_2), \end{aligned} \end{aligned}$$
(8)

where \( W\left( u_1, u_2\right) = V\left( u_1^2-u_2^2,2u_1u_2\right) \). The corresponding equations of motion read

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t}{u}_1=&\frac{v_1}{4(u_1^2+u_2^2)}+\frac{1}{2}\omega u_2, \nonumber \\ \frac{\mathrm {d}}{\mathrm {d}t}{u}_2=&\frac{v_2}{4(u_1^2+u_2^2)}-\frac{1}{2}\omega u_1,\nonumber \\ \frac{\mathrm {d}}{\mathrm {d}t}{v}_1=&\frac{(v_1^2+v_2^2)u_1}{4(u_1^2+u_2^2)^2}+\frac{1}{2}\omega v_2 -\frac{2\mu u_1}{(u_1^2+u_2^2)^2}- \dfrac{\partial W }{\partial u_1},\nonumber \\ \frac{\mathrm {d}}{\mathrm {d}t}{v}_2=&\frac{(v_1^2+v_2^2)u_2}{4(u_1^2+u_2^2)^2}-\frac{1}{2}\omega v_1 -\frac{2\mu u_2}{(u_1^2+u_2^2)^2}- \dfrac{\partial W }{\partial u_2}, \end{aligned}$$
(9)
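The canonicity of the transformation (7) claimed above can be verified directly; a minimal sympy sketch, not part of the original text:

```python
# Minimal sympy sketch: verify that the Levi-Civita change of variables (7)
# preserves the standard Poisson brackets, i.e. it is canonical.
import sympy as sp

u1, u2, v1, v2 = sp.symbols('u1 u2 v1 v2')

q1 = u1**2 - u2**2
q2 = 2*u1*u2
den = 2*(u1**2 + u2**2)
p1 = (u1*v1 - u2*v2)/den
p2 = (u2*v1 + u1*v2)/den

def pb(f, g):
    """Poisson bracket with respect to the canonical pairs (u1, v1), (u2, v2)."""
    return (sp.diff(f, u1)*sp.diff(g, v1) - sp.diff(f, v1)*sp.diff(g, u1)
          + sp.diff(f, u2)*sp.diff(g, v2) - sp.diff(f, v2)*sp.diff(g, u2))

print(sp.simplify(pb(q1, p1)))  # expected 1
print(sp.simplify(pb(q2, p2)))  # expected 1
print(sp.simplify(pb(q1, p2)))  # expected 0
print(sp.simplify(pb(q1, q2)))  # expected 0
print(sp.simplify(pb(p1, p2)))  # expected 0
```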

The crucial step in our considerations is the following time change \(t\mapsto \tau \), given by

$$\begin{aligned} \frac{\mathrm {d}t}{\mathrm {d}\tau }= 4(u_1^2 +u_2^2). \end{aligned}$$
(10)

It is equivalent to the multiplication of the above vector field (9) by \( 4(u_1^2+u_2^2) \). Thus, in the new time the system reads

$$\begin{aligned} \begin{aligned} \dot{u}_1=&v_1+2\omega u_2(u_1^2+u_2^2), \\ \dot{u}_2=&v_2-2\omega u_1(u_1^2+u_2^2), \\ \dot{v}_1=&\frac{(v_1^2+v_2^2)}{(u_1^2+u_2^2)}u_1+ 2\omega v_2(u_1^2+u_2^2) -\frac{8\mu u_1}{u_1^2+u_2^2}\\&- 4 (u_1^2+u_2^2)\dfrac{\partial W }{\partial u_1},\\ \dot{v}_2=&\frac{(v_1^2+v_2^2)}{(u_1^2+u_2^2)}u_2-2\omega v_1(u_1^2+u_2^2) -\frac{8\mu u_2}{u_1^2+u_2^2}\\&- 4(u_1^2+u_2^2)\dfrac{\partial W }{\partial u_2}, \end{aligned} \end{aligned}$$
(11)

where now the overdot denotes differentiation with respect to \( \tau \).

Let us introduce function

$$\begin{aligned} K_\mu :=4(u_1^2+u_2^2)({{\widetilde{K}}}_\mu -h) = K_0 -4\mu , \end{aligned}$$
(12)

where

$$\begin{aligned} \begin{aligned} K_0=&\frac{1}{2}\left( v_1^2+v_2^2\right) + 2\omega (u_1^2+u_2^2)\left( u_2v_1-u_1v_2\right) \\&+4(u_1^2+u_2^2) \left[ W\left( u_1, u_2\right) -h\right] . \end{aligned} \end{aligned}$$
(13)

Then, we notice that the system (11) can be written as

$$\begin{aligned} \begin{aligned} \dot{u}_1=&\dfrac{\partial K_\mu }{\partial v_1}, \\ \dot{u}_2=&\dfrac{\partial K_\mu }{\partial v_2}, \\ \dot{v}_1=&- \dfrac{\partial K_\mu }{\partial u_1} +8 ({\widetilde{K}}_\mu -h)u_1,\\ \dot{v}_2=&- \dfrac{\partial K_\mu }{\partial u_2} +8({\widetilde{K}}_\mu -h)u_2. \end{aligned} \end{aligned}$$
(14)

Hence, on the level \( {\widetilde{K}}_\mu =h \), system (11) coincides with the Hamiltonian vector field generated by the Hamiltonian \( K_\mu \). The above construction explains the origin of the Hamiltonian \( K_0 \) given by (5).

Let us assume that \(F(\varvec{q}, \varvec{p})\) is a first integral of the system generated by Hamiltonian (6). After transformation (7), we obtain the first integral \({\widetilde{F}}(\varvec{u},\varvec{v})\) of the transformed system (9), that is \(\{{\widetilde{F}},{\widetilde{K}}_\mu \}=0 \), where \(\{\cdot ,\cdot \}\) denotes the Poisson bracket. The function \({\widetilde{F}}\) is also a first integral of system (11) and of system (14). However, since \(K_\mu =4(u_1^2+u_2^2)({\widetilde{K}}_\mu -h)\), the Leibniz rule for the Poisson bracket together with \(\{{\widetilde{F}},{\widetilde{K}}_\mu \}=0\) gives

$$\begin{aligned} \{{\widetilde{F}},K_\mu \}=4 ({\widetilde{K}}_\mu -h)\{{\widetilde{F}},u_1^2+u_2^2\}. \end{aligned}$$
(15)

The above shows the following statement.

Proposition 1

If \(F(\varvec{q}, \varvec{p})\) is a first integral of the system generated by \(H_{\mu }\) given in (6), then \({\widetilde{F}}(\varvec{u},\varvec{v})\) is a first integral of the system generated by Hamiltonian \(K_{\mu }\) given in (12) restricted to its zero level \(K_{\mu }=0\).

In particular, it is possible that the system generated by \(H_{\mu }\) is integrable, but the system generated by \(K_{\mu }\) is not integrable. We give an example in Sect. 7.

3 Particular solutions and variational equations

The Hamiltonian system governed by \(K_{\mu }\) admits invariant planes. To see them explicitly, it is convenient to make the following canonical change of variables proposed in [24]

$$\begin{aligned} \begin{bmatrix} \varvec{u}\\ \varvec{v}\end{bmatrix} = \begin{bmatrix} \varvec{A}&{} 0 \\ 0 &{} \varvec{A}^{-1} \end{bmatrix} \begin{bmatrix} \varvec{x}\\ \varvec{y}\end{bmatrix} ,\qquad \varvec{A}=\frac{1}{\sqrt{2}} \begin{bmatrix} 1 &{} \mathrm {i}\\ \mathrm {i}&{} 1 \end{bmatrix} , \end{aligned}$$
(16)

where \(\varvec{x}=[x_1,x_2]^T\) and \(\varvec{y}=[y_1,y_2]^T\). After this transformation, Hamiltonian (12) takes the form

$$\begin{aligned} \begin{aligned} {{\mathscr {K}}}_\mu =&-\mathrm {i}(y_1 y_2+8 h x_1 x_2) -4 \omega x_1 x_2 (x_1 y_1 - x_2 y_2)\\&+ 8\mathrm {i}x_1 x_2V({\widetilde{x}}_1,{\widetilde{x}}_2) -4\mu , \end{aligned}\nonumber \\ \end{aligned}$$
(17)

where \({\widetilde{x}}_1=x_1^2-x_2^2\) and \({\widetilde{x}}_2=\mathrm {i}(x_1^2+x_2^2)\). Removing the constant term that does not affect the dynamics of the system, we obtain

$$\begin{aligned} \begin{aligned} {{\mathscr {K}}}_0=&-\mathrm {i}(y_1 y_2+8 h x_1 x_2) -4 \omega x_1 x_2 (x_1 y_1 - x_2 y_2)\\&+ 8\mathrm {i}x_1 x_2V({\widetilde{x}}_1,{\widetilde{x}}_2), \end{aligned} \end{aligned}$$

which is the regularized Hamiltonian \(K_0\) expressed in the new variables by means of (16).
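This identification can be checked directly; a minimal sympy sketch, not part of the original text, substitutes (16) into \(K_0\) and compares the result with \({{\mathscr {K}}}_0\), treating the potential term separately because V is an arbitrary function.

```python
# Minimal sympy sketch: the linear canonical change (16) maps K_0 from (13)
# into the form of script-K_0 given in (17).
import sympy as sp

x1, x2, y1, y2, omega, h = sp.symbols('x1 x2 y1 y2 omega h')
i = sp.I
s = 1/sp.sqrt(2)

# (16): u = A*x, v = A^(-1)*y with A = (1/sqrt(2))*[[1, i], [i, 1]]
u1, u2 = s*(x1 + i*x2), s*(i*x1 + x2)
v1, v2 = s*(y1 - i*y2), s*(-i*y1 + y2)

# K_0 of (13) with the potential term W kept aside
K0_noW = (sp.Rational(1, 2)*(v1**2 + v2**2)
          + 2*omega*(u1**2 + u2**2)*(u2*v1 - u1*v2)
          - 4*h*(u1**2 + u2**2))
sK0_noV = -i*(y1*y2 + 8*h*x1*x2) - 4*omega*x1*x2*(x1*y1 - x2*y2)

print(sp.expand(K0_noW - sK0_noV))                  # expected 0
print(sp.expand(u1**2 - u2**2 - (x1**2 - x2**2)))   # first argument of V
print(sp.expand(2*u1*u2 - i*(x1**2 + x2**2)))       # second argument of V
print(sp.expand(4*(u1**2 + u2**2) - 8*i*x1*x2))     # prefactor of the potential term
```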

Hamilton’s equations of \({{\mathscr {K}}}_0\) have the form

$$\begin{aligned} \begin{aligned} \dot{x}_1=&-\mathrm {i}y_2-4\omega x_1^2x_2,\\ \dot{x}_2=&-\mathrm {i}y_1+4\omega x_1x_2^2,\\ \dot{y}_1=&8\mathrm {i}\left( h-V({\widetilde{x}}_1,{\widetilde{x}}_2)\right) x_2+ 4\omega (2 x_1 y_1 - x_2 y_2) x_2\\&-16 x_1^2x_2 \left( \mathrm {i}\frac{\partial V}{\partial {\widetilde{x}}_1}- \frac{\partial V}{\partial {\widetilde{x}}_2}\right) ,\\ \dot{y}_2=&8\mathrm {i}\left( h-V({\widetilde{x}}_1,{\widetilde{x}}_2)\right) x_1+4\omega (x_1 y_1 - 2x_2 y_2) x_1\\&+16 x_1x_2^2 \left( \mathrm {i}\frac{\partial V}{\partial {\widetilde{x}}_1}+\frac{\partial V}{\partial {\widetilde{x}}_2}\right) . \end{aligned} \end{aligned}$$
(18)

Now, one can easily notice that the two planes

$$\begin{aligned} \begin{aligned} {{\mathscr {M}}}_1&=\{(x_1,x_2,y_1,y_2)\in \mathbb {C}^4\ |\ x_2=y_1=0\},\\ {{\mathscr {M}}}_2&=\{(x_1,x_2,y_1,y_2)\in \mathbb {C}^4\ |\ x_1=y_2=0\}, \end{aligned} \end{aligned}$$

are invariant. Let us restrict our system to plane \( {{\mathscr {M}}}_1 \)

$$\begin{aligned} \begin{aligned} \dot{x}_1&=-\mathrm {i}y_2,\\ \dot{y}_2&=8\mathrm {i}\left( h-V(x_1^2,\mathrm {i}x_1^2)\right) x_1 =8\mathrm {i}\left( h-b x_1^{2k}\right) x_1. \end{aligned} \end{aligned}$$
(19)

In the last equality, we used the homogeneity of the potential V, which gives \( V(x_1^2,\mathrm {i}x_1^2)=x_1^{2k} V(1,\mathrm {i})=x_1^{2k} b \), where \( b:=V(1,\mathrm {i}) \). This restricted system has the first integral

$$\begin{aligned} e=-\frac{1}{2}y_2^2+4x_1^2\left( \frac{b}{k+1}x_1^{2k}-h\right) . \end{aligned}$$
(20)
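That e is indeed conserved along the flow of (19) can be checked by a short computation; a minimal sympy sketch, not part of the original text:

```python
# Minimal sympy sketch: verify that e given by (20) is a first integral of
# the restricted system (19).
import sympy as sp

x1, y2, h, b, k = sp.symbols('x1 y2 h b k')

# right-hand sides of the restricted system (19)
dx1 = -sp.I*y2
dy2 = 8*sp.I*(h - b*x1**(2*k))*x1

# candidate first integral (20)
e = -sp.Rational(1, 2)*y2**2 + 4*x1**2*(b*x1**(2*k)/(k + 1) - h)

# time derivative of e along the flow
de_dt = sp.diff(e, x1)*dx1 + sp.diff(e, y2)*dy2
print(sp.simplify(de_dt))  # expected 0
```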

The variational equations around this particular solution read

$$\begin{aligned} \begin{bmatrix} \dot{X}_1 \\ \dot{Y}_2 \\ \dot{X}_2\\ \dot{Y}_1 \end{bmatrix} = \begin{bmatrix} 0 &{} -\mathrm {i}&{} -4\omega x_1^2 &{} 0 &{} \\ a_{12} &{} 0 &{} -8\omega x_1y_2 &{} 4\omega x_1^2 \\ 0 &{} 0 &{} 0 &{} -\mathrm {i}\\ 0 &{} 0 &{} a_{12} &{} 0 \end{bmatrix} \begin{bmatrix} X_1 \\ Y_2 \\ X_2 \\ Y_1 \end{bmatrix} , \end{aligned}$$
(21)

where \(a_{12}= 8\mathrm {i}\left( h-(2k+1)bx_1^{2k}\right) \). Let us notice that the equations for the variables \(X_2\) and \(Y_1\) form a closed subsystem. We rewrite the above equations as a system of second-order equations

$$\begin{aligned} \begin{aligned} {\ddot{X}}_2&= c(t)X_2,\\ {\ddot{X}}_1&= c(t) X_1+d(t) X_2, \end{aligned} \end{aligned}$$
(22)

where

$$\begin{aligned} c(t):= & {} 8\left[ h-(2k+1)bx_1^{2k}\right] ,\nonumber \\ d(t)= & {} 16\mathrm {i}\omega x_1y_2. \end{aligned}$$
(23)

Let us notice that the homogeneous part of the second equation is equal to the first equation. This means that the solvability of the whole system (22) depends on the solvability of the first, homogeneous equation. Indeed, if we know the general solution of the homogeneous equation, then the general solution of the second equation can be obtained by the method of variation of constants.

Now we assume that \( h\ne 0 \) and in this case we set \( e=0 \). The Yoshida transformation of the independent variable

$$\begin{aligned} t\mapsto z= \frac{b}{h(k+1)} x_1^{2k}(t), \end{aligned}$$
(24)

transforms the first equation of (22) into the hypergeometric equation

$$\begin{aligned} X_2''+p(z)X_2'+ q(z)X_2=0,\quad '=\frac{\mathrm {d}}{\mathrm {d}z}, \end{aligned}$$
(25)

with rational coefficients

$$\begin{aligned} \begin{aligned} p(z)=&\frac{\ddot{z}}{(\dot{z})^2}=\frac{2-3z}{2z(1-z)},\\ q(z)=&\frac{c(t)}{(\dot{z})^2}= \frac{(1 +k)(1+ 2 k) z-1}{4 k^2 (1 - z) z^2}. \end{aligned} \end{aligned}$$

In these calculations we used formulae

$$\begin{aligned} \begin{aligned} \dot{z}=&-\frac{2\mathrm {i}bk}{h(k+1)} x_1^{2k-1}(t)y_2(t),\\ \ddot{z}=&\frac{16 b k^2}{h (1 + k)^2} x_1^{2k}(t)\left[ 2h(k+1)-3b x_1^{2k}(t)\right] , \end{aligned} \end{aligned}$$

which, after Yoshida’s transformation (24) can be rewritten as

$$\begin{aligned} (\dot{z})^2= 32 h k^2 (1 - z) z^2, \quad \ddot{z}=16 h k^2 z (2 - 3 z), \end{aligned}$$

where \( y_2(t) \) was calculated from the condition \( e=0 \), see (20). Coefficients c(t) and d(t) after transformation (24) change into

$$\begin{aligned} \begin{aligned} c(z)=&8 h \left[ -1 + (1 + k) (1 + 2 k) z\right] ,\\ d(z)=&32\sqrt{2}h^{\frac{1}{k}+ \frac{1}{2}}\left( \frac{b}{k+1}\right) ^{-\frac{1}{k}}\omega z^{\frac{1}{k}}\sqrt{1-z}. \end{aligned} \end{aligned}$$

We transform (25) to its reduced form by means of the Tschirnhaus transformation of the dependent variables

$$\begin{aligned} \begin{aligned} X_1&=f(z) Y,\quad X_2=f(z) X,\\ f(z)&=\exp \left[ -\frac{1}{2}\int p(z) dz\right] = \frac{1}{\sqrt{z}(1-z)^{\frac{1}{4}}}. \end{aligned} \end{aligned}$$

Thus, finally we obtain the following system of linear equations

$$\begin{aligned}&X''=r(z)X, \end{aligned}$$
(26)
$$\begin{aligned}&Y''=r(z)Y+s(z) X, \end{aligned}$$
(27)

with coefficients

$$\begin{aligned} \begin{aligned}&r(z)= \frac{\rho ^2-1}{4z^2}+ \frac{\sigma ^2-1}{4(z-1)^2}+ \frac{1-\rho ^2-\sigma ^2+\tau ^2}{4z(z-1)},\\&s(z)=a\frac{z^{\frac{1}{k}-2}}{\sqrt{1-z}}, \qquad a=\frac{\sqrt{2} (k+1)^{\frac{1}{k}} b^{-\frac{1}{k}} h^{\frac{1}{k}-\frac{1}{2}}\omega }{k^2 } \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \rho =\frac{1}{k},\quad \sigma =\frac{1}{2},\quad \tau =\frac{1}{k}+\frac{3}{2}. \end{aligned}$$

Homogeneous Eq. (26) has solutions

$$\begin{aligned} \begin{aligned} g_1(z)=&(1-z)^{\frac{3}{4}}z^{\frac{k+1}{2k}},\\ g_2(z)=&g_1(z)\int \frac{{\mathrm{d}}z}{g_1^2(z)}\\ =&-k(1-z)^{\frac{3}{4}}z^{\frac{k-1}{2k}}F\left( \tfrac{3}{2}, -\tfrac{1}{k};\tfrac{k-1}{k};z\right) , \end{aligned} \end{aligned}$$

where \(F(\alpha ,\beta ;\gamma ;z)= {}_2F_1(\alpha ,\beta ;\gamma ;z) \) is the Gauss hypergeometric function, i.e. a holomorphic solution of the hypergeometric Eq. (A2), see Appendix. Let us introduce two integrals

$$\begin{aligned} \begin{aligned} \psi =&\int \frac{{\mathrm{d}}z}{g_1^2(z)}= \int \frac{z^{-\frac{k+1}{k}}}{(1-z)^{3/2}}{\mathrm{d}}z\\ =&-kz^{-\frac{1}{k}} F\left( \tfrac{3}{2},-\tfrac{1}{k};\tfrac{k-1}{k};z\right) \\ =&\,\,2(1-z)^{-1/2} F\left( -\tfrac{1}{2},1+\tfrac{1}{k};\tfrac{1}{2};1-z\right) \\&- \frac{2\sqrt{\pi }\Gamma \left( -\tfrac{1}{k} \right) }{ \Gamma \left( -\tfrac{k+2}{2k} \right) }, \end{aligned} \end{aligned}$$
(28)

where \(\Gamma (x)\) denotes the Euler gamma function and

$$\begin{aligned} \begin{aligned} \varphi&=\int s(z)g_1^2(z){\mathrm{d}}z=a\int z^{-1+\frac{2}{k}}(1-z){\mathrm{d}}z\\&=\frac{ak}{2(k+2)} z^{\frac{2}{k}}\left( k+2-2z\right) , \end{aligned} \end{aligned}$$
(29)

which appear in the construction of solutions of Eqs. (26) and (27), and one more integral built by means of them

$$\begin{aligned} \begin{aligned}&I(z)=\int \psi '(z)\varphi (z){\mathrm{d}}z\\&\quad = \frac{c}{\sqrt{2} k }\int z^{-1+\frac{1}{k}}(1-z)^{-3/2} \left( k+2-2z\right) {\mathrm{d}}z \\&\quad =\sqrt{2} c \left[ \frac{z^{\frac{1}{k}}}{\sqrt{1-z}}- \sqrt{1-z}\,\, F\left( \tfrac{1}{2},1-\tfrac{1}{k};\tfrac{3}{2};1-z\right) \right] , \end{aligned} \nonumber \\ \end{aligned}$$
(30)

where the constant factor is

$$\begin{aligned} c=\frac{(k+1)^{\frac{1}{k}} b^{-\frac{1}{k}} h^{\frac{1}{k}-\frac{1}{2}}\omega }{k+2}. \end{aligned}$$

The integrability conditions will be expressed by means of these functions.
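As a cross-check of the above reduction, not part of the original derivation, recall that for an equation \(x''+p(z)x'+q(z)x=0\) the substitution \(x=\exp \left[ -\tfrac{1}{2}\int p\,{\mathrm{d}}z\right] y\) yields \(y''=r y\) with \(r=p^2/4+p'/2-q\). The following sympy sketch verifies that this r coincides with the expression for r(z) displayed above and that \(g_1\) solves Eq. (26).

```python
# Minimal sympy sketch: consistency of p(z), q(z), r(z) and the solution g1.
import sympy as sp

z, k = sp.symbols('z k', positive=True)

p = (2 - 3*z)/(2*z*(1 - z))
q = ((1 + k)*(1 + 2*k)*z - 1)/(4*k**2*(1 - z)*z**2)
r_from_pq = p**2/4 + sp.diff(p, z)/2 - q

rho, sigma, tau = 1/k, sp.Rational(1, 2), 1/k + sp.Rational(3, 2)
r_disp = ((rho**2 - 1)/(4*z**2) + (sigma**2 - 1)/(4*(z - 1)**2)
          + (1 - rho**2 - sigma**2 + tau**2)/(4*z*(z - 1)))

print(sp.simplify(r_from_pq - r_disp))  # expected 0

# g1 solves X'' = r X
g1 = (1 - z)**sp.Rational(3, 4)*z**((k + 1)/(2*k))
lhs = sp.powsimp(sp.expand(sp.diff(g1, z, 2)/g1), force=True)
print(sp.simplify(lhs - r_disp))        # expected 0
```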

4 Non-integrability criterion

Since the variational equations crucial for our integrability analysis have the form (27), in this section we consider in detail a system of two linear differential equations

$$\begin{aligned} x''&= rx, \end{aligned}$$
(31)
$$\begin{aligned} y''&= ry + sx, \end{aligned}$$
(32)

where the coefficients r and s are elements of a differential field K with the constant subfield \( \mathbb {C}\). Notice that the variational Eqs. (26)–(27) are of this form. We look for the conditions under which the identity component of the differential Galois group of the system (31)–(32) is Abelian. Our considerations presented here are just a modification of those contained in [9]; in particular, we extract facts from Section 2 and Section 3.3 of this reference. Let \( F_1 \) and \( F_2 \) denote the Picard–Vessiot fields of Eq. (31) and of the system (31)–(32), respectively. The differential Galois groups of the extensions \( F_i/ K\), for \( i=1,2\), are denoted by \( G_i \), and the identity components of \( G_i \) by \( G_i^{\circ }\). By L we denote the algebraic closure of the differential field K in the field \(F_1\).

Hence, we look for the conditions under which \( G_2^{\circ } \) is Abelian. The field \( F_1 \) can be considered as a subfield of \( F_2 \), and group \( G_1 \) as a quotient of \( G_2 \). This is why we would like to express these conditions in terms of conditions on \(G_1^{\circ }\), and \(r, s \in K\).

Let \(\{x_1, x_2 \}\) denote a basis of solutions of (31) normalized in such a way that

$$\begin{aligned} W (x_1, x_2) = \det (X) = 1, \quad \text { where}\quad X = \begin{bmatrix} x_1 &{} x_2 \\ x_1' &{} x_2' \end{bmatrix}, \end{aligned}$$

is the fundamental matrix. For each \(\sigma \in G_2\), there exists a matrix \(A (\sigma ) \in \mathrm {SL}(2,\mathbb {C})\) such that \( \sigma (X) = XA(\sigma ) \). Our basic assumption is that the identity component \( G_1^{\circ } \) of \(G_1\) is isomorphic to the additive subgroup \( G_{\mathrm {a}} \) of \( \mathrm {SL}(2,\mathbb {C}) \). Then, we choose \( \{x_1, x_2 \} \) such that for all \(\sigma \in G_1^{\circ } \), the matrix \( A (\sigma ) \) is a unipotent upper triangular matrix, i.e.

$$\begin{aligned} G_{\mathrm {a}}=\left\{ A(\sigma )\in \mathrm {SL}(2,\mathbb {C})\ |\ A(\sigma )=\begin{pmatrix}1&{}a\\ 0&{}1\end{pmatrix},\, a\in \mathbb {C}\right\} . \end{aligned}$$

We have the following lemma.

Lemma 1

The group \( G_1^{\circ } \) is isomorphic to \( G_{\mathrm {a}} \), if and only if there exists a positive integer m such that \( x_1^m \in K \), and \( x_2 \) is transcendental over K. In this case, the algebraic closure of K in \( F_1 \) is \( L = K [x_1] \) .

This lemma is a part of a more general result proved by Kovacic [15]. See also the first point of Lemma 2.1 in [9].

Let \( x_1 \) be a nonzero algebraic solution of Eq. (31), which exists according to our assumptions. Then, the transcendental solution of this equation is given by \( x_2=x_1\psi \), \( \psi = \int x_1^{-2} \). Let \(y_i\) be a nonzero solution of (32) with \(x=x_i\), that is

$$\begin{aligned} y_i'' = ry_i + sx_i, \qquad i=1, 2. \end{aligned}$$
(33)

Using the method of variation of constants, we find

$$\begin{aligned} y_1 = x_1 \int \frac{\varphi }{x_1^2} ,\quad y_2 = x_2 \int \frac{\varphi }{x_2^2}, \quad \varphi = \int sx_1^2. \end{aligned}$$
(34)

In terms of integrals \( \varphi \) and \( \psi \) the necessary and sufficient conditions which guarantee that the group \( G_2^{\circ } \) is Abelian are given in the following theorem.

Theorem 4

Assume that group \( G_1^{\circ } \) is isomorphic to \( G_{\mathrm {a}} \). Then, the group \( G_2^{\circ } \) is Abelian if and only if the following condition is satisfied

$$\begin{aligned} \varphi \psi - 2 \int \varphi \cdot \psi ' = 2 \int \varphi ' \psi - \varphi \cdot \psi \in L [\psi ]. \end{aligned}$$
(35)

Moreover, if this condition is satisfied, then there exists \( c \in \mathbb {C}\) such that \( \varphi + c \psi \in L\).

This theorem is a part of Theorem 2.3 in [9]. Its proof proceeds in the following steps. First, the necessary and sufficient conditions have to be expressed in terms of properties of groups. Then, they are reformulated in terms of certain Wronskians and finally written in terms of primitives.

For the purpose of this paper, we reformulate this theorem.

Theorem 5

Assume that group \( G_1^{\circ } \) is isomorphic to \( G_{\mathrm {a}} \) and \( \varphi \in L \). Then, the group \( G_2^{\circ } \) is Abelian if and only if there exists \( \lambda \in \mathbb {C}\) such that

$$\begin{aligned} \lambda \psi + I \in L, \quad \text {where}\quad I=\int \varphi \cdot \psi '. \end{aligned}$$
(36)

Proof

As \( \varphi \in L \) and condition (35) is satisfied, \( I\in L[\psi ] \). Thus, \( \psi \) and I are algebraically dependent over L. Hence, by the Kolchin–Ostrowski theorem, see, e.g. [14], there exist constants \( \lambda \), \( \mu \in \mathbb {C}\), \( (\lambda ,\mu )\ne (0,0) \), such that

$$\begin{aligned} \lambda \psi + \mu I \in L. \end{aligned}$$
(37)

But \( \psi \) is not algebraic, so \( \mu \) cannot vanish, and we can assume that \( \mu =1 \). \(\square \)

5 Checking algebraicity of \(\lambda \psi + I\)

From the previous section, it follows that the test whether the differential Galois group \( G_2^{\circ } \) of our variational Eq. (27) is Abelian reduces to checking whether there exists a constant \(\lambda \in \mathbb {C}\) such that the linear combination \(\lambda \psi + I\) is an algebraic function. Taking into account the expressions for \(\psi \) given in (28) and for I in (30), and omitting irrelevant algebraic terms, we conclude that we should consider the following function

$$\begin{aligned} g(z) :=\lambda F(\alpha ,\beta ,\gamma ; z)+ zF({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}; z), \end{aligned}$$
(38)

where \( \lambda \in \mathbb {C}\), and parameters of the hypergeometric functions are as follows

$$\begin{aligned} \alpha&=-\frac{1}{2},&\beta&=1+\frac{1}{k},&\gamma&=\frac{1}{2}, \end{aligned}$$
(39)
$$\begin{aligned} {{\widetilde{\alpha }}}&=\frac{1}{2},&{{\widetilde{\beta }}}&= 1-\frac{1}{k},&{{\widetilde{\gamma }}}&=\frac{3}{2}. \end{aligned}$$
(40)

In the above formulae, k is a nonzero integer.

Thus, in our problem, the condition ‘\(\lambda \psi + I\) is algebraic’ translates into the following one: ‘there exists \(\lambda \in \mathbb {C}\) such that g(z) is algebraic’.

We introduce local solutions of the hypergeometric equation with parameters \(\alpha ,\beta ,\gamma \) and \({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}\) around singularity \(z=0\)

$$\begin{aligned} u_1(z) =&F(\alpha ,\beta ,\gamma ; z), \qquad \qquad u_2(z)= \sqrt{z}, \\ {\widetilde{u}}_1(z) =&F({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}; z), \qquad \qquad {\widetilde{u}}_2(z)= \frac{1}{\sqrt{z}}, \end{aligned}$$

and around singularity \(z=1\)

$$\begin{aligned} v_1(z) =&\sqrt{z}, \\ v_2(z)=&(1-z)^{-\tfrac{1}{k}}F\left( 1,-\tfrac{1}{2}-\tfrac{1}{k};1-\tfrac{1}{k};1-z\right) ,\\ {{\widetilde{v}}}_1(z) =&\frac{1}{\sqrt{z}}, \\ {{\widetilde{v}}}_2(z)=&(1-z)^{\tfrac{1}{k}} F\left( 1,\tfrac{1}{2}+\tfrac{1}{k};1+\tfrac{1}{k};1-z\right) , \end{aligned}$$

respectively. We will check how these local solutions change during analytical continuation along certain closed contours encircling singularities which give local monodromies contained in the differential Galois group. Notions of analytical continuation and local and global monodromy and their calculations for the hypergeometric equation are presented in Appendix.

Fig. 1 Loops around singularities for the monodromy calculations

For further considerations, we will need the commutator loop \( \rho _1 = \sigma _0\sigma _1\sigma _0^{-1}\sigma _1^{-1}\), where \( \sigma _0\) and \(\sigma _1\) denote loops with one common point \(z_0\) encircling counter-clockwise the singularities \(z=0\) and \(z=1\), respectively, see Fig. 1. The corresponding monodromy matrix is

$$\begin{aligned} C := M_{\rho _1}= M_{\sigma _0} M_{\sigma _1} M_{\sigma _0}^{-1} M_{\sigma _1}^{-1}. \end{aligned}$$
(41)

The loop around the infinity \( \sigma _{\infty } \) is chosen such that \( \sigma _0\sigma _1\sigma _{\infty }={\mathrm {Id}}\) and then \( M_{\sigma _0} M_{\sigma _1}M_{\sigma _\infty }={\mathrm {Id}}\). We will also need the following commutator

$$\begin{aligned} D:= M_{\rho _{\infty }}= M_{\sigma _0} M_{\sigma _\infty } M_{\sigma _0}^{-1} M_{\sigma _\infty }^{-1}. \end{aligned}$$
(42)

For the sets of parameters (39) and (40) the respective commutator matrices are denoted by C, \({{\widetilde{C}}}\) and D, \( {{\widetilde{D}}} \). All of them are unipotent and upper triangular, thus they have the forms

$$\begin{aligned} C=\begin{bmatrix} 1 &{} c_{12} \\ 0 &{} 1 \end{bmatrix},\,\, {{\widetilde{C}}}=\begin{bmatrix} 1 &{} {{\widetilde{c}}}_{12} \\ 0 &{} 1 \end{bmatrix},\,\, D=\begin{bmatrix} 1 &{} d_{12} \\ 0 &{} 1 \end{bmatrix},\,\, {{\widetilde{D}}}=\begin{bmatrix} 1 &{} {{\widetilde{d}}}_{12} \\ 0 &{} 1 \end{bmatrix}, \end{aligned}$$

with non-trivial entries

$$\begin{aligned} \begin{aligned} c_{12}=&- \mathrm {e}^{-\frac{2 \mathrm {i}\pi }{k}}d_{12}, \quad {\widetilde{c}}_{12}= -\frac{\sqrt{\pi } \left( 1-\mathrm {e}^{\frac{2 \mathrm {i}\pi }{k}}\right) \Gamma \left( \frac{1}{k}\right) }{\Gamma \left( \frac{1}{2}+\frac{1}{k}\right) }, \\ d_{12}=&-\frac{2 \sqrt{\pi } \left( 1-\mathrm {e}^{\frac{2 \mathrm {i}\pi }{k}}\right) \Gamma \left( -\frac{1}{k}\right) }{\Gamma \left( -\frac{k+2}{2 k}\right) }, \,\,\, {{\widetilde{d}}_{12}} = - \mathrm {e}^{-\frac{2 \mathrm {i}\pi }{k}} {{\widetilde{c}}}_{12}. \\ \end{aligned} \end{aligned}$$
(43)

Lemma 2

If \( |k |>2 \), then for arbitrary \( \lambda \in \mathbb {C}\), function g(z) is not algebraic.

Proof

If \(|k |>2 \), then neither \( F(\alpha ,\beta ,\gamma ; z) \) nor \(F({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}; z) \) is algebraic. In fact, for \( F(\alpha ,\beta ,\gamma ; z) \) the monodromy element C is not of finite order; for a similar reason, \(F({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}; z)\) is not algebraic. Let us assume that g(z) is algebraic. Then, taking all its analytical continuations along loops with the same base point, we can obtain only finitely many values of g(z).

The analytical continuations of \( F(\alpha ,\beta ,\gamma ; z) \) and \(F({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}; z)\) along loop \(\rho _1\) give

$$\begin{aligned} \begin{aligned} {{\mathcal {M}}}_{\rho _1}( F(\alpha ,\beta ,\gamma ; z)) =&F(\alpha ,\beta ,\gamma ; z) +c_{12} \sqrt{z},\\ {{\mathcal {M}}}_{\rho _1}(F({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}; z))=&F({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}; z) +{\widetilde{c}}_{12}\frac{1}{\sqrt{z}}, \end{aligned} \end{aligned}$$
(44)

where \(c_{12}\) and \({{\widetilde{c}}}_{12}\) are given in (43). Hence, we have

$$\begin{aligned} \begin{aligned} {{\mathcal {M}}}_{\rho _1}(g(z))&= \lambda {{\mathcal {M}}}_{\rho _1}( F(\alpha ,\beta ,\gamma ; z))\\&+ z {{\mathcal {M}}}_{\rho _1}(F({{\widetilde{\alpha }}},{{\widetilde{\beta }}},{{\widetilde{\gamma }}}; z)) = g(z) +\sqrt{z}\Delta _1, \end{aligned} \end{aligned}$$
(45)

where

$$\begin{aligned} \Delta _1 := \lambda c_{12} + {{\widetilde{c}}}_{12}. \end{aligned}$$
(46)

If \( \Delta _1\ne 0 \), then analytical continuation along loops \( \rho _1^{n} \) gives infinitely many values of g(z) as we have

$$\begin{aligned} {{\mathcal {M}}}_{\rho _1^n}(g(z)) = g(z) +n\sqrt{z}\Delta _1, \quad n \in \mathbb {Z}. \end{aligned}$$
(47)

Thus, if g(z) is algebraic, then \( \Delta _1=0 \). But this one condition is not sufficient: indeed, \( \Delta _1=0 \) for \(\lambda = -{{\widetilde{c}}}_{12}/ c_{12}\). This is why we should also consider the commutator D. Similar reasoning with analytical continuation along the loop \( \rho _{\infty } \) gives \({{\mathcal {M}}}_{\rho _\infty }(g(z)) = g(z) +\sqrt{z}\Delta _\infty \), where \( \Delta _{\infty }:= \lambda d_{12} + {{\widetilde{d}}}_{12}\), with \(d_{12}\) and \({{\widetilde{d}}}_{12}\) given in (43), and

$$\begin{aligned} {{\mathcal {M}}}_{\rho _\infty ^n}(g(z)) = g(z) +n\sqrt{z}\Delta _\infty , \quad n \in \mathbb {Z}. \end{aligned}$$
(48)

Thus, if g(z) is algebraic, then \( \Delta _1=0\) and \(\Delta _{\infty }=0 \), that is, there exists \(\lambda \in \mathbb {C}\) such that

$$\begin{aligned} \lambda c_{12} + {{\widetilde{c}}}_{12} =0 , \qquad \lambda d_{12} + {{\widetilde{d}}}_{12} = 0. \end{aligned}$$

It is possible if and only if the following determinant

$$\begin{aligned} \det \begin{bmatrix} c_{12} &{} {{\widetilde{c}}}_{12} \\ d_{12} &{} {{\widetilde{d}}}_{12} \end{bmatrix}&= \nonumber \\&32\pi \mathrm {i}\sin \left( \frac{\pi }{k} \right) ^3 \cos \left( \frac{\pi }{k} \right) \tfrac{\Gamma \left( \frac{1}{k}\right) \Gamma \left( -\frac{1}{k}\right) }{\Gamma \left( \frac{k+2}{2 k}\right) \Gamma \left( -\frac{k+2}{2 k}\right) } \nonumber \\&= 4\pi \mathrm {i}(k+2) \sin \left( \frac{2\pi }{k} \right) ^2 \end{aligned}$$
(49)

vanishes. In these calculations, we used the well-known formula \(\Gamma (x)\Gamma (-x)=-\tfrac{\pi }{x\sin (\pi x)}\). However, by assumption \( |k |>2 \), so the determinant (49) is different from zero. This contradiction finishes the proof. \(\square \)
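As an independent numerical cross-check, not part of the original proof, the identity (49) can be verified directly from the entries (43) for several integer values of k.

```python
# Minimal numerical sketch: check the determinant identity (49) for several
# integers k, using the commutator entries (43).
import cmath
import math
from math import pi, gamma

def entries(k):
    w = cmath.exp(2j*pi/k)
    c12_t = -math.sqrt(pi)*(1 - w)*gamma(1/k)/gamma(0.5 + 1/k)
    d12 = -2*math.sqrt(pi)*(1 - w)*gamma(-1/k)/gamma(-(k + 2)/(2*k))
    c12 = -d12/w          # c12 = -exp(-2*pi*I/k)*d12
    d12_t = -c12_t/w      # d12~ = -exp(-2*pi*I/k)*c12~
    return c12, c12_t, d12, d12_t

for k in (3, 4, 5, 7, -3, -5):
    c12, c12_t, d12, d12_t = entries(k)
    det = c12*d12_t - c12_t*d12
    rhs = 4j*pi*(k + 2)*math.sin(2*pi/k)**2
    print(k, abs(det - rhs) < 1e-9)
```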

6 Proofs

The proof of Theorem 1 follows from Lemmas 1 and 2. In fact, let us assume that the system generated by the Hamiltonian \( K_0 \) given in (5) admits an additional first integral which is a rational function of the variables \( (u_1,u_2, v_1,v_2) \). In Sect. 3, we showed that it admits a non-equilibrium particular solution. Moreover, by Lemma 1, Lemma 2 and Theorem 5, the identity component of the differential Galois group of the variational equations (26)–(27) along this particular solution is not Abelian. Thus, by Theorem 3, the system is not integrable. This contradiction finishes the proof.

For the proof of Theorem 2, we need the following lemma.

Lemma 3

Assume that the system given by the following Hamiltonian

$$\begin{aligned} {\widetilde{H}}(\varvec{q},\varvec{p},\alpha )={\widetilde{H}}_{0}(\varvec{q},\varvec{p}) - \alpha {\widetilde{G}}(\varvec{q},\varvec{p}), \end{aligned}$$
(50)

where \(\alpha \) is a parameter, admits a rational first integral \({\widetilde{J}}(\varvec{q},\varvec{p},\alpha )\) functionally independent with \({\widetilde{H}}(\varvec{q},\varvec{p},\alpha )\). Then, the system given by Hamiltonian

$$\begin{aligned} H(\varvec{q},\varvec{p},h)=\frac{1}{{\widetilde{G}}(\varvec{q},\varvec{p})}\left( {\widetilde{H}}_{0}(\varvec{q},\varvec{p}) -h \right) \end{aligned}$$
(51)

admits a rational first integral

$$\begin{aligned} J(\varvec{q},\varvec{p},h):={\widetilde{J}}\Big (\varvec{q},\varvec{p},\frac{{\widetilde{H}}_{0}(\varvec{q},\varvec{p})-h}{{\widetilde{G}}(\varvec{q},\varvec{p})}\Big ), \end{aligned}$$

functionally independent with \(H(\varvec{q},\varvec{p},h)\).

The transformation in this lemma, called the coupling constant metamorphosis, interchanges the roles of the coupling constant \(\alpha \) and the energy \(h={\widetilde{H}}(\varvec{q},\varvec{p},\alpha )\). The new Hamiltonian \(H(\varvec{q},\varvec{p},h)\) is equal to \(\alpha \) obtained from the equation \(h={\widetilde{H}}(\varvec{q},\varvec{p},\alpha )\). For details and a proof, see [12, 26, 29].

Now, we pass to the proof of Theorem 2. Let us assume that the system given by Hamiltonian (6) satisfying the assumptions of this theorem is integrable with a first integral \(I(\varvec{q},\varvec{p},\mu )\). Then, the system given by Hamiltonian \({\widetilde{K}}_{\mu }(\varvec{u},\varvec{v}) \), see (8), is also integrable with the corresponding first integral \({\widetilde{J}}_{\mu }(\varvec{u},\varvec{v})\). Note that we can set \({\widetilde{K}}(\varvec{u},\varvec{v}, \alpha )={\widetilde{K}}_{\mu }(\varvec{u},\varvec{v}) \) with \(\alpha =4\mu \), and then we can write

$$\begin{aligned} {\widetilde{K}}(\varvec{u},\varvec{v},\alpha ) = {\widetilde{K}}_0(\varvec{u},\varvec{v}) - \alpha {\widetilde{G}}(\varvec{u},\varvec{v}), \end{aligned}$$
(52)

where

$$\begin{aligned} \begin{aligned} {\widetilde{K}}_0(\varvec{u},\varvec{v})&=\frac{v_1^2+v_2^2}{8(u_1^2+u_2^2)}+ \frac{1}{2}\omega (u_2v_1-u_1v_2)\\&\quad + W(u_1,u_2), \quad {\widetilde{G}}(\varvec{u},\varvec{v})= \frac{1}{4(u_1^2+u_2^2)}. \end{aligned} \end{aligned}$$
(53)

Hence, we can apply Lemma 3 to this system. By this Lemma, the system given by Hamiltonian

$$\begin{aligned} \begin{aligned} K(\varvec{u},\varvec{v},&\alpha ) = \frac{1}{{\widetilde{G}}(\varvec{u},\varvec{v})}\left( {\widetilde{K}}_{0}(\varvec{u},\varvec{v}) -h \right) \\&= 4(u_1^2+u_2^2)({{\widetilde{K}}}_\mu (\varvec{u},\varvec{v})-h)=K_{\mu }(\varvec{u},\varvec{v}), \end{aligned} \end{aligned}$$
(54)

is integrable. However, \(K_{\mu }=K_0-4\mu \), and we proved in Theorem 1 that \(K_0\) is not integrable. This contradiction finishes our proof.

7 An example

In Sect. 1, we mentioned that the non-integrability of the regularized system (5) does not imply that the original system (2) is not integrable. Here, we consider Hamiltonian (2) with the potential

$$\begin{aligned} V(q_1,q_2)=V_{l,k}(q_1,q_2):=(q_2-\mathrm {i}q_1)^{k-l}(q_2+\mathrm {i}q_1)^{l},\nonumber \\ \end{aligned}$$
(55)

where k and l are integers, \( k>2 \) and \( 0\le l \le k \). Potentials of this form are called exceptional potentials, see, e.g. [17]. It is known, see [7], that if \( \omega =0 \), then polynomial potentials \(V_{l,k}(q_1,q_2) \) with \( l\in \{0,1,k-1,k\}\), \(l=k/2\) for even k, and \((l,k)=(2,7)\) or \((l,k)=(5,7)\) are integrable with additional polynomial first integrals.

Among the exceptional potentials, we have the radial potentials, and for them we have the following obvious result.

Proposition 2

For arbitrary \( \omega \in \mathbb {C}\), and \(m\in \mathbb {Q}\) Hamiltonian system governed by

$$\begin{aligned} H=\frac{1}{2}(p_1^2+p_2^2)+\omega (p_1q_2-p_2q_1)+ (q_1^2+q_2^2)^m,\nonumber \\ \end{aligned}$$
(56)

is integrable with the additional first integral \( I=p_1 q_2 -p_2q_1.\)

We remark that at points \((q_1,q_2)=(\pm 1,\mathrm {i})\) the radial potential \(V(q_1,q_2)= (q_1^2+q_2^2)^m\) either vanishes or has algebraic poles, so our theorems do not apply.
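The claim of Proposition 2 amounts to the vanishing of a single Poisson bracket, which can be checked mechanically; a minimal sympy sketch, not part of the original text:

```python
# Minimal sympy sketch: check that I = p1*q2 - p2*q1 Poisson-commutes with
# the Hamiltonian (56).
import sympy as sp

q1, q2, p1, p2, omega, m = sp.symbols('q1 q2 p1 p2 omega m')

H = (sp.Rational(1, 2)*(p1**2 + p2**2) + omega*(p1*q2 - p2*q1)
     + (q1**2 + q2**2)**m)
I_first = p1*q2 - p2*q1

def pb(f, g):
    """Standard Poisson bracket in the canonical variables (q1, q2, p1, p2)."""
    return (sp.diff(f, q1)*sp.diff(g, p1) - sp.diff(f, p1)*sp.diff(g, q1)
          + sp.diff(f, q2)*sp.diff(g, p2) - sp.diff(f, p2)*sp.diff(g, q2))

print(sp.simplify(pb(I_first, H)))  # expected 0
```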

More interesting is the polynomial potential \( V= V_{0,k}=(q_2-\mathrm {i}q_1)^{k} \) for an integer k; the corresponding Hamilton’s function is

$$\begin{aligned} H_0=\frac{1}{2}(p_1^2+p_2^2)+\omega (p_1q_2-p_2q_1)+ (q_2-\mathrm {i}q_1)^k. \nonumber \\ \end{aligned}$$
(57)

When working with this potential it is convenient to use canonical variables \( (x_1,x_2,y_1,y_2) \) defined as

$$\begin{aligned} \begin{aligned} x_1&=q_2-\mathrm {i}q_1,\quad x_2=q_2+\mathrm {i}q_1, \\ y_1&=\frac{1}{2}(p_2+\mathrm {i}p_1),\quad y_2=\frac{1}{2}(p_2-\mathrm {i}p_1). \end{aligned} \end{aligned}$$
(58)

In these variables, the Hamiltonian of the system reads

$$\begin{aligned} H_0(\varvec{x},\varvec{y}) = 2y_1y_2 +\mathrm {i}\omega (x_2y_2-x_1y_1) + x_1^k. \end{aligned}$$
(59)

Now it is easy to show the following proposition.

Proposition 3

For arbitrary \( \omega \in \mathbb {C}\), and an integer \( k\ne -1 \), the Hamiltonian system generated by (59) is super-integrable. If \( \omega =0 \), then it admits two rational additional first integrals

$$\begin{aligned} I_1=y_2, \quad I_2 =y_2 \left( x_1y_1-x_2y_2 \right) +\frac{k}{2(k+1)}x_1^{k+1}.\nonumber \\ \end{aligned}$$
(60)

If \( \omega \ne 0 \), then it admits two, generally non-algebraic, first integrals

$$\begin{aligned} J_1 = 2 y_2\exp \left[ \frac{\mathrm {i}\omega x_1}{2 y_2} \right] , \end{aligned}$$
(61)

and

$$\begin{aligned} J_2=\mathrm {i}\omega \left( x_1y_1-x_2y_2 \right) - \left( \frac{J_1}{\mathrm {i}k\omega }\right) ^k \Gamma \left( k+1, \tfrac{\mathrm {i}k \omega x_1}{2 y_2}\right) ,\nonumber \\ \end{aligned}$$
(62)

where \( \Gamma (s,x) \) is the incomplete gamma function.

Proof

The equations of motion have the form

$$\begin{aligned} \begin{aligned}&\dot{x}_1= 2y_2-\mathrm {i}\omega x_1,\quad \dot{y}_1= \mathrm {i}\omega y_1-kx_1^{k-1}, \\&\dot{x}_2= 2y_1+\mathrm {i}\omega x_2,\quad \dot{y}_2=- \mathrm {i}\omega y_2. \end{aligned} \end{aligned}$$
(63)

Now, if \( \omega =0 \), then it is easy to check that \( I_1 \) and \( I_2 \) given by (60) are first integrals.
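These checks, as well as the conservation of \( J_1 \) for arbitrary \( \omega \), can be performed mechanically; a minimal sympy sketch, not part of the original proof:

```python
# Minimal sympy sketch: I1, I2 from (60) are conserved along the flow (63)
# when omega = 0, and J1 from (61) is conserved for arbitrary omega.
import sympy as sp

x1, x2, y1, y2, omega, k = sp.symbols('x1 x2 y1 y2 omega k')

def rhs(w):
    # right-hand sides of (63) for a given value of omega
    return {x1: 2*y2 - sp.I*w*x1, x2: 2*y1 + sp.I*w*x2,
            y1: sp.I*w*y1 - k*x1**(k - 1), y2: -sp.I*w*y2}

def time_derivative(F, w):
    return sum(sp.diff(F, v)*f for v, f in rhs(w).items())

I1 = y2
I2 = y2*(x1*y1 - x2*y2) + k*x1**(k + 1)/(2*(k + 1))
J1 = 2*y2*sp.exp(sp.I*omega*x1/(2*y2))

print(sp.simplify(time_derivative(I1, 0)))      # expected 0
print(sp.simplify(time_derivative(I2, 0)))      # expected 0
print(sp.simplify(time_derivative(J1, omega)))  # expected 0
```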

In order to deduce the form of first integrals in the case \( \omega \ne 0 \), we notice that system (63) can be solved explicitly. In fact, first we solve equation for \( y_2 \)

$$\begin{aligned} y_2(t)=C_1e^{-\mathrm {i}\omega t}. \end{aligned}$$
(64)

Next, using the variation of constants method we find solution for \( x_1 \)

$$\begin{aligned} x_1(t)=(2C_1t +C_2)e^{-\mathrm {i}\omega t}. \end{aligned}$$
(65)

As the system is invariant with respect to time translations, we can put \( C_2=0 \) without loss of generality. Hence, \( t=\tfrac{x_1}{2y_2} \). Then, from the above two equalities, we find that \( 2C_1 = J_1 \) is a first integral given by (61).

Knowing \( x_1 \) we find, again using the variation of constants method, that the solution for \( y_1 \) is

$$\begin{aligned} y_1(t)=C_3e^{\mathrm{i\omega t}}+ (2C_1)^{k-1}(\mathrm {i}\omega )^{-k}\Gamma \left( k,\mathrm {i}k \omega t\right) e^{\mathrm {i}\omega t}. \end{aligned}$$
(66)

Finally, by the same method we find

$$\begin{aligned} x_2(t)= & {} (2C_3t+C_4)e^{\mathrm{i \omega t}}\nonumber \\&+ 2\frac{(2\mathrm {i}C_1)^{k-1}}{k\omega ^{k+1}} \Big [\Gamma \left( k+1,\mathrm {i}k \omega t\right) \nonumber \\&- \mathrm {i}k \omega t\Gamma \left( k,\mathrm {i}k \omega t\right) \Big ] e^{\mathrm{i \omega t}}. \end{aligned}$$
(67)

Let \( L=x_1y_1-x_2y_2 \). It is easy to check that \(\dot{L}= - k x_1^k\). As, by formula (65), \(x_1(t) = J_1t \exp [-\mathrm {i}\omega t]\), we have

$$\begin{aligned} \frac{\mathrm {d}L}{\mathrm {d}t} = - k x_1^k = -k(J_1 t)^k\mathrm {e}^{-\mathrm {i}\omega k t}. \end{aligned}$$
(68)

By the direct integration, we obtain

$$\begin{aligned} L = \frac{1}{\mathrm {i}\omega }\left( \frac{J_1}{\mathrm {i}k \omega } \right) ^k \Gamma (k+1, \mathrm {i}k \omega t) + \frac{C}{\mathrm {i}\omega }, \end{aligned}$$
(69)

where C is a constant of integration. Putting \( t=\tfrac{x_1}{2y_2} \) in the above equality we find

$$\begin{aligned} C= & {} J_2 = \mathrm {i}\omega (x_1y_1-x_2y_2) \nonumber \\&-\left( \frac{J_1}{\mathrm {i}k \omega } \right) ^k \Gamma \left( k+1, \tfrac{\mathrm {i}k \omega x_1}{2y_2}\right) , \end{aligned}$$
(70)

and this finishes our proof. \(\square \)

Remark 1

For an integer \( k>0 \), the first integral (62) is a polynomial. In fact, if k is a positive integer, then the following identity holds

$$\begin{aligned} \mathrm {e}^x\Gamma (k+1,x) = k! \sum _{j=0}^k \frac{x^j}{j!}, \end{aligned}$$
(71)

see, e.g. [20, p. 339]. Hence, we obtain

$$\begin{aligned}&\left( \frac{2y_2}{\mathrm {i}k \omega }\right) ^k\exp \left[ \frac{\mathrm {i}k \omega x_1}{2 y_2} \right] \Gamma \left( k+1, \tfrac{\mathrm {i}k \omega x_1}{2 y_2}\right) \nonumber \\&\quad = k! \sum _{j=0}^{k}\frac{1}{j!}\left( \frac{2y_2}{\mathrm {i}k \omega }\right) ^{k-j} x_1^j. \end{aligned}$$
(72)
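This identity can also be tested numerically; a minimal sketch, not part of the original text, evaluates both sides of (71) at a sample point using the upper incomplete gamma function from mpmath.

```python
# Minimal numerical sketch: check identity (71) at a sample point for a few
# positive integers k.
import mpmath as mp

x = mp.mpf('0.7')  # arbitrary sample point
for k in (1, 2, 3, 5):
    lhs = mp.exp(x)*mp.gammainc(k + 1, x)  # e**x * Gamma(k+1, x)
    rhs = mp.factorial(k)*mp.fsum(x**j/mp.factorial(j) for j in range(k + 1))
    print(k, mp.almosteq(lhs, rhs))
```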

Remark 2

One can immediately note that system (63) has the linear Darboux polynomial \( F_1=y_2 \) with the constant cofactor \( P_1=- \mathrm {i}\omega \). Moreover, one can check that it also has the exponential factor \( F_2=\exp \left( \frac{\mathrm {i}\omega x_1}{2y_2}\right) \) with the cofactor \( P_2=\mathrm {i}\omega \). For an explanation of the notion of an exponential factor, see, e.g. [5]. Because \( P_1+P_2=0 \), this system has a first integral of the form \(F_1F_2\), which is just (61). This first integral was found by T. Stachowiak using the reasoning described above.

Now, we investigate the properties of the regularization of Hamiltonian (57). After the canonical Levi-Civita transformation (7), we obtain the new Hamiltonian

$$\begin{aligned} {\widetilde{K}}_0=\frac{1}{8}\frac{v_1^2+v_2^2}{u_1^2+u_2^2 }+ \frac{1}{2}\omega \left( u_2v_1-u_1v_2\right) + W\left( u_1, u_2\right) ,\nonumber \\ \end{aligned}$$
(73)

where

$$\begin{aligned} W(u_1, u_2)=(-\mathrm {i})^k(u_1+\mathrm {i}u_2)^{2k}. \end{aligned}$$
(74)

Of course, it has two functionally independent first integrals \({\widetilde{J}}_1(\varvec{u}, \varvec{v})\) and \({\widetilde{J}}_2(\varvec{u}, \varvec{v})\) which are just first integrals \(J_1(\varvec{x},\varvec{y})\) and \(J_2(\varvec{x},\varvec{y})\) after substitution

$$\begin{aligned} x_1&= -\mathrm {i}\left( u_1 +\mathrm {i}u_2 \right) ^2 ,&x_2&= \mathrm {i}\left( u_1 -\mathrm {i}u_2 \right) ^2, \end{aligned}$$
(75)
$$\begin{aligned} y_1&= \frac{\mathrm {i}}{4} \frac{ v_1 -\mathrm {i}v_2}{ u_1 +\mathrm {i}u_2},&y_2&= -\frac{\mathrm {i}}{4} \frac{ v_1 +\mathrm {i}v_2}{ u_1 -\mathrm {i}u_2}. \end{aligned}$$
(76)

The regularized Hamiltonian reads

$$\begin{aligned} \begin{aligned} K_0=&\frac{1}{2}\left( v_1^2+v_2^2\right) + 2\omega (u_1^2+u_2^2)\left( u_2v_1-u_1v_2\right) \\&+4(u_1^2+u_2^2) \left[ W\left( u_1, u_2\right) -h\right] . \end{aligned} \end{aligned}$$
(77)

Since in the considered case \( V(q_1,q_2)=(q_2-\mathrm {i}q_1)^{k} \), we have \(V(-1,\mathrm {i})=(2\mathrm {i})^k\ne 0\). Thus, by Theorem 1, the regularized system is not integrable. However, on the level \(K_0(\varvec{u},\varvec{v})=0\), it is super-integrable with \({\widetilde{J}}_1(\varvec{u}, \varvec{v})\) and \({\widetilde{J}}_2(\varvec{u}, \varvec{v})\) as first integrals.