1 Introduction

Linear, autonomous systems of ordinary differential equations are celebrated for their analytical integrability, which makes them excellent tools in almost all fields of science. This solvability breaks down in the realm of non-linear ordinary differential equation systems, where no general solution method exists. Integrable non-linear systems are found only in a minority of cases, including, for example, linearizable planar systems [1, 2] or polynomial systems with isochronous centers [3, 4]. In most cases, however, non-linear systems are handled with sophisticated numerical methods such as Runge–Kutta schemes [5] or other advanced techniques [6].

Despite the great accuracy of numerical methods, it is desirable to find analytical techniques for solving non-linear systems [7]. Besides the inherent beauty of exact solutions, they can also serve as benchmarks for numerical methods. In this work, we focus on first-order, quadratic, autonomous differential equation systems (QDEs), which emerge in various research fields including models of population dynamics [8, 9], fluid dynamics [10], control systems [11] and even quantum dynamics [12]. The classification of two-variable quadratic systems has been studied extensively [13,14,15,16,17,18].

There exists no general method to find analytical solutions of QDEs, except for the special case when the number of variables is just one. Denoting this variable by x(t), the quadratic differential equation is written as

$$\begin{aligned} {\dot{x}}=ax^2 + vx \end{aligned}$$
(1)

with \(a\ne 0\) and v being constants. An additional constant term on the right-hand side could be eliminated by shifting x(t). By introducing \(y(t) = x(t)^{-1}\), the equation is transformed into the linear equation

$$\begin{aligned} {\dot{y}}=-a-vy \end{aligned}$$
(2)

and, hence, the quadratic differential equation (1) becomes exactly solvable.
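For completeness, we note the explicit solution: for \(v\ne 0\), Eq. (2) integrates to \(y(t) = \left( y_0 + \frac{a}{v}\right) e^{-vt} - \frac{a}{v}\), so that, with \(y_0 = 1/x_0\),

$$\begin{aligned} x(t) = \frac{v\, x_0}{\left( v + a x_0\right) e^{-vt} - a x_0}, \end{aligned}$$

while for \(v = 0\) one obtains \(x(t) = x_0/(1 - a x_0 t)\).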

In this paper, we investigate a class of QDEs with more than one variable which can be solved exactly by transforming the QDE to a linear differential equation system (LDE). The transformation is realized through a multi-variable extension of \(y(t) = x(t)^{-1}\), similar to the generalized spherical inversions presented in Refs. [19, 20].

2 Generalized Inversion Transformation

Let us consider some differentiable functions \(x_1(t)\), \(x_2(t), \ldots , x_n(t)\) whose dynamics is governed by the QDE

$$\begin{aligned} {\dot{x}}_i = {\textbf{x}}^{T} {\textbf{A}}_i {\textbf{x}}+ {\textbf{v}}_i^{T}{\textbf{x}} \end{aligned}$$
(3)

where \({\textbf{A}}_i\) are \(n\times n\) matrices which are chosen to be symmetric and \({\textbf{v}}_i^T\) are row vectors of n components. In the equation, \({\textbf{x}}(t)\) is the column vector containing the functions \(x_1(t)\), \(x_2(t), \ldots , x_n(t)\). The entries of \({\textbf{A}}_i\) and \({\textbf{v}}_i\) are independent of t and are assumed to be real without loss of generality. Our goal is to determine whether the variables \(x_i(t)\) can be transformed with a multivariable variant of the inversion in such a way that the QDE (3) is transformed into an LDE. The multivariable inversion is defined as

$$\begin{aligned} y_i(t) = \frac{x_i(t)}{{\textbf{x}}(t)^T{\textbf{B}}{\textbf{x}}(t)} \end{aligned}$$
(4)

for some symmetric matrix \({\textbf{B}}\) not depending on time. Note that we use the same \({\textbf{B}}\) matrix for all i. This ensures that the transformation is easily inverted by

$$\begin{aligned} x_i(t) = \frac{y_i(t)}{{\textbf{y}}(t)^T{\textbf{B}}{\textbf{y}}(t)}. \end{aligned}$$
(5)

The transformations (4) and (5) will henceforth be referred to as B-transformation and inverse B-transformation, respectively. The variables \(y_i(t)\) are expected to obey the linear differential equations

$$\begin{aligned} \dot{y_i} = {\textbf{m}}_i^T{\textbf{y}}+ w_i \end{aligned}$$
(6)

where \({\textbf{m}}_i^T\) are time-independent row vectors and \(w_i\) are scalars. Since Eq. (6) is exactly solvable for arbitrary initial conditions, the QDE can also be solved by applying the inverse B-transformation (5). The goal of the present paper is to provide a procedure based on which one can decide whether a given QDE, defined through \({\textbf{A}}_i\) and \({\textbf{v}}_i\), can be B-transformed to an LDE.
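Before proceeding, here is a minimal numerical sketch of the transformation pair (4)-(5) (our own illustration, with arbitrarily chosen test data): applying the map twice returns the original vector wherever \({\textbf{x}}^T{\textbf{B}}{\textbf{x}}\ne 0\).

import numpy as np

def b_transform(x, B):
    # Generalized inversion, Eq. (4): y = x / (x^T B x)
    return x / (x @ B @ x)

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
B = (B + B.T) / 2                          # B must be symmetric
x = rng.normal(size=3)                     # random point with x^T B x != 0

y = b_transform(x, B)
assert np.allclose(b_transform(y, B), x)   # inverse B-transformation, Eq. (5)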

To start our study, let us calculate the time derivative of Eq. (5)

$$\begin{aligned} {\dot{x}}_i=\frac{{\dot{y}}_i}{{\textbf{y}}^T{\textbf{B}}{\textbf{y}}} - \frac{y_i}{\left( {\textbf{y}}^T{\textbf{B}}{\textbf{y}}\right) ^2}\big ({\dot{{\textbf{y}}}}^T{\textbf{B}}{\textbf{y}}+ {\textbf{y}}^T{\textbf{B}}{\dot{{\textbf{y}}}}\big ). \end{aligned}$$
(7)

Substituting the LDE (6) and then transforming all \(y_i\) variables back to \(x_i\) by means of Eq. (4), we obtain

$$\begin{aligned} {\dot{x}}_i = {\textbf{m}}_i^{T}{\textbf{x}}+ w_i\left( {\textbf{x}}^T{\textbf{B}}{\textbf{x}}\right) - {\textbf{x}}^T{\textbf{B}}{\textbf{w}}x_i -x_i{\textbf{w}}^T{\textbf{B}}{\textbf{x}}- x_i\frac{{\textbf{x}}^T\left( {\textbf{M}}^T{\textbf{B}}+{\textbf{B}}{\textbf{M}}\right) {\textbf{x}}}{{\textbf{x}}^T{\textbf{B}}{\textbf{x}}} \end{aligned}$$
(8)

with the matrix \({\textbf{M}}\) built from the row vectors \({\textbf{m}}_i^T\). In the formula, \({\textbf{w}}\) is the column vector consisting of the constants \(w_i\).

By comparing Eq. (8) with the original QDE (3), one notes that Eq. (8) is not necessarily a quadratic differential equation: the last term has an \({\textbf{x}}\)-dependent denominator and, hence, is neither quadratic nor linear in general. This term scales with \({\textbf{x}}\) like a linear term but becomes an actual linear term only if

$$\begin{aligned} {\textbf{M}}^T{\textbf{B}}+ {\textbf{B}}{\textbf{M}}= -\lambda {\textbf{B}} \end{aligned}$$
(9)

holds with some scalar \(\lambda \); the minus sign is chosen for later convenience. The condition (9) means that \({\textbf{B}}\) must be an eigenmatrix [21] of \({\textbf{M}}\) with eigenvalue \(-\lambda \). In this case, the differential equations (8) become

$$\begin{aligned} {\dot{x}}_i = \left( {\textbf{m}}_i^{T} +\lambda {\textbf{e}}_i^T\right) {\textbf{x}}+ {\textbf{x}}^T\left( w_i {\textbf{B}}-{\textbf{B}}\left( {\textbf{w}}\circ {\textbf{e}}_i^T\right) - \left( {\textbf{e}}_i\circ {\textbf{w}}^T\right) {\textbf{B}}\right) {\textbf{x}} \end{aligned}$$
(10)

where \(\circ \) denotes the dyadic product and \({\textbf{e}}_i^T\) is the unit row vector with 1 in the ith entry and zeros elsewhere. The equation is now a quadratic differential equation containing quadratic and linear terms in \({\textbf{x}}\).
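As a sanity check of the algebra (our own numerical sketch, not part of the original derivation): for \({\textbf{B}}={\textbf{I}}\), the condition (9) holds for any \({\textbf{M}}=-\frac{\lambda }{2}{\textbf{I}}+{\textbf{S}}\) with skew-symmetric \({\textbf{S}}\), and the right-hand side of Eq. (10) then reproduces \({\dot{x}}_i\) computed through Eqs. (4), (6) and (7).

import numpy as np

rng = np.random.default_rng(1)
n, lam = 3, 0.7
S = rng.normal(size=(n, n))
S = S - S.T                              # skew-symmetric
M = -lam / 2 * np.eye(n) + S             # M^T B + B M = -lam B for B = I
B, w, x = np.eye(n), rng.normal(size=n), rng.normal(size=n)

# left-hand side: transform x -> y, apply the LDE (6), map back via Eq. (7)
y = x / (x @ B @ x)                      # Eq. (4)
ydot = M @ y + w                         # Eq. (6)
q = y @ B @ y
xdot = ydot / q - y * 2 * (ydot @ B @ y) / q**2

# right-hand side of Eq. (10), term by term
for i in range(n):
    e = np.eye(n)[i]
    K = w[i] * B - np.outer(B @ w, e) - np.outer(e, B @ w)
    assert np.isclose(xdot[i], (M[i] + lam * e) @ x + x @ K @ x)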

Comparing the linear terms of Eqs. (10) and (3), we find

$$\begin{aligned} {\textbf{V}}= {\textbf{M}}+ \lambda {\textbf{I}} \end{aligned}$$
(11)

where \({\textbf{V}}\) is the \(n\times n\) matrix whose rows are the row vectors \({\textbf{v}}_i^T\) and \({\textbf{I}}\) is the \(n\times n\) identity matrix. Substituting (11) into the condition (9), we obtain

$$\begin{aligned} {\textbf{V}}^T{\textbf{B}}+ {\textbf{B}}{\textbf{V}}= \lambda {\textbf{B}} \end{aligned}$$
(12)

indicating that the matrix \({\textbf{B}}\) must be a symmetric eigenmatrix of \({\textbf{V}}\) as well.

Besides the linear terms, the quadratic terms of Eqs. (10) and (3) must also coincide. Since both \({\textbf{A}}_i\) and the kernel of the quadratic term in Eq. (10) are symmetric, they must be equal,

$$\begin{aligned} {\textbf{A}}_i = w_i{\textbf{B}}- {\textbf{B}}\left( {\textbf{w}}\circ {\textbf{e}}_i^T\right) - \left( {\textbf{e}}_i\circ {\textbf{w}}^T\right) {\textbf{B}} \end{aligned}$$
(13)

for all i. Note that the second and third terms are matrices full of zeros except for the ith column and ith row, respectively. Therefore, omitting the ith row and column in the equation leads to the proportionality condition

$$\begin{aligned} {\tilde{{\textbf{A}}}}_i^i = w_i {\tilde{{\textbf{B}}}}^i, \end{aligned}$$
(14)

where \({\tilde{{\textbf{A}}}}_i^i\) (\({\tilde{{\textbf{B}}}}^i\)) is the \((n-1)\times (n-1)\) matrix which is obtained from \({\textbf{A}}_i\) (\({\textbf{B}}\)) by skipping the ith row and column. For the ith column of Eq. (13), we have

$$\begin{aligned} {\textbf{a}}_i^i \equiv {\textbf{A}}_i{\textbf{e}}_i = w_i {\textbf{B}}{\textbf{e}}_i -{\textbf{B}}{\textbf{w}}- \left( {\textbf{e}}_i^T{\textbf{B}}{\textbf{w}}\right) {\textbf{e}}_i. \end{aligned}$$
(15)

To summarize, the QDE as given in Eq. (3) can be transformed with a B-transformation to an LDE if we find a symmetric matrix \({\textbf{B}}\) fulfilling Eq. (12) with some constant \(\lambda \) and, at the same time, obeying Eqs. (14) and (15) with some constants \(w_i\). The linear terms of the LDE are then obtained by expressing \({\textbf{M}}\) from Eq. (11), i.e., \({\textbf{M}}={\textbf{V}}-\lambda {\textbf{I}}\).

3 Algorithm

Let us provide a detailed description of the procedure based on the findings of the previous section. The starting point of the algorithm is Eq. (3), determined by the matrices \({\textbf{A}}_i\) and vectors \({\textbf{v}}_i\). A compact numerical sketch implementing the steps is given after the list.

  1.

    As a first step, one has to build up the matrix \({\textbf{V}}\) from the row vectors \({\textbf{v}}_i^T\) and find all symmetric eigenmatrices based on \({\textbf{V}}^T {\textbf{B}}+ {\textbf{B}}{\textbf{V}}= \lambda {\textbf{B}}\). Note that the eigenmatrices are defined only up to an overall multiplicative factor, similarly to the case of usual eigenvectors.

    To obtain the eigenmatrices, one may calculate the vector-eigenvalues \(s_j\) of \({\textbf{V}}^T\) and the corresponding right-hand eigenvectors \({\textbf{r}}_j\), i.e., \( {\textbf{V}}^T{\textbf{r}}_{j} = s_j{\textbf{r}}_{j}\). The sum of the geometric multiplicities of all eigenvalues is denoted by N. The matrix-eigenvalues of \({\textbf{V}}\) are given by \(s_1 + s_1\), \(s_1+s_2, \ldots , s_1 + s_N\), \(s_2+s_2\), \(s_2 + s_3, \ldots ,s_N+s_N\), and the eigenmatrix corresponding to \(s_j + s_m\) is \({\textbf{P}}_{jm}=\left( {\textbf{r}}_{j}\circ {\textbf{r}}_{m}^T + {\textbf{r}}_{m}\circ {\textbf{r}}_{j}^T\right) /2\). Note that the number of symmetric eigenmatrices is \(N(N+1)/2\), which is at most \(n(n+1)/2\) when n eigenvectors are found.

    In the case of degenerate matrix-eigenvalues \(\lambda \), all linear combinations within the degenerate subspace are potential \({\textbf{B}}\) matrices. For an example, see Sect. 5.

  2.

    For each potential eigenmatrix \({\textbf{B}}\), the proportionality condition (14) must be checked for all i with some constants \(w_i\), which may also be zero. If Eq. (14) cannot be fulfilled with any potential \({\textbf{B}}\) matrix, then the QDE is not solvable by B-transforming to an LDE. If Eq. (14) is satisfied with a \({\textbf{B}}\) matrix and some \(w_i\) constants, one may proceed to Step 3.

  3.

    For the potential pairs of \({\textbf{B}}\) and \({\textbf{w}}\) surviving Step 2, one has to check Eq. (15) for all i. If it holds true for all i, then the QDE (3) can be transformed to an LDE of the form of Eq. (6) where the constants \(w_i\) are the coefficients of proportionality found in Eq. (14) in Step 2 and the vectors \({\textbf{m}}_i^T\) are the row vectors of \({\textbf{M}}= {\textbf{V}}- \lambda {\textbf{I}}\) with \(\lambda \) the eigenvalue corresponding to the eigenmatrix \({\textbf{B}}\).

  4.

    The solution of the QDE can be obtained by first solving the linear equation (6) as

    $$\begin{aligned} {\textbf{y}}(t) = e^{{\textbf{M}}t}{\textbf{y}}_0 + {\textbf{y}}_p(t) \end{aligned}$$
    (16)

    where \({\textbf{y}}_0 = {\textbf{x}}_0/({\textbf{x}}_0^T{\textbf{B}}{\textbf{x}}_0)\) is the initial condition with \({\textbf{x}}_0 = {\textbf{x}}(t=0)\), and \({\textbf{y}}_p(t) = \left( e^{{\textbf{M}}t}-{\textbf{I}}\right) {\textbf{M}}^{-1}{\textbf{w}}\) if \({\textbf{M}}\) is invertible. If \({\textbf{M}}\) is not invertible, \({\textbf{y}}_p(t)\) acquires a linear time-dependence in the nullspace of \({\textbf{M}}\). Using the inverse B-transformation, the solution of the QDE is obtained as

    $$\begin{aligned} {\textbf{x}}(t)=\frac{e^{{\textbf{M}}t}{\textbf{x}}_0 + \left( {\textbf{x}}_0^T{\textbf{B}}{\textbf{x}}_0\right) {\textbf{y}}_p(t)}{e^{-\lambda t} + 2{\textbf{y}}_p(t)^T{\textbf{B}}e^{{\textbf{M}}t}{\textbf{x}}_0 + \left( {\textbf{x}}_0^T{\textbf{B}}{\textbf{x}}_0\right) \left( {\textbf{y}}_p(t)^T{\textbf{B}}{\textbf{y}}_p(t)\right) } . \end{aligned}$$
    (17)
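The procedure above is straightforward to implement. The following Python sketch is our own illustration (the function names and the SciPy-based solution step are assumptions, not part of the original text): it realizes Steps 1-3 by scanning the pairwise eigenmatrices \({\textbf{P}}_{jm}\), and Step 4 via Eq. (17). Linear combinations within degenerate eigenspaces (cf. Sect. 5) and the non-invertible-\({\textbf{M}}\) case are not handled.

import numpy as np
from scipy.linalg import expm

def find_b_transformation(A_list, V, tol=1e-9):
    """Steps 1-3: search for (B, w, lam) fulfilling Eqs. (12), (14), (15).

    A_list holds the symmetric matrices A_i; V is built from the rows v_i^T.
    Only the pairwise eigenmatrices P_jm are scanned.  Returns the tuple
    (B, w, lam, M) on success, or None if the QDE is not B-transformable."""
    n = V.shape[0]
    s, R = np.linalg.eig(V.T)                    # Step 1: V^T r_j = s_j r_j
    for j in range(n):
        for m in range(j, n):
            lam = s[j] + s[m]
            B = (np.outer(R[:, j], R[:, m]) + np.outer(R[:, m], R[:, j])) / 2
            w = np.zeros(n, dtype=complex)
            ok = True
            for i in range(n):                   # Step 2: Eq. (14)
                keep = [k for k in range(n) if k != i]
                At = A_list[i][np.ix_(keep, keep)]
                Bt = B[np.ix_(keep, keep)]
                if np.abs(Bt).max() > tol:       # w_i from the largest entry
                    idx = np.unravel_index(np.argmax(np.abs(Bt)), Bt.shape)
                    w[i] = At[idx] / Bt[idx]
                if not np.allclose(At, w[i] * Bt, atol=tol):
                    ok = False
                    break
            if not ok:
                continue
            for i in range(n):                   # Step 3: Eq. (15)
                e = np.zeros(n)
                e[i] = 1.0
                rhs = w[i] * (B @ e) - B @ w - (e @ B @ w) * e
                if not np.allclose(A_list[i] @ e, rhs, atol=tol):
                    ok = False
                    break
            if ok:
                return B, w, lam, V - lam * np.eye(n)
    return None

def solve_qde(x0, B, w, lam, M, t):
    """Step 4: closed-form solution, Eq. (17), assuming M is invertible."""
    E = expm(M * t)
    yp = (E - np.eye(len(x0))) @ np.linalg.solve(M, w)   # y_p(t)
    c = x0 @ B @ x0
    den = np.exp(-lam * t) + 2 * (yp @ B @ (E @ x0)) + c * (yp @ B @ yp)
    return (E @ x0 + c * yp) / den

Applied to the data of Sect. 4 below, this sketch recovers a multiple of \({\textbf{P}}_{12}\) together with \(\lambda = -1\); the overall scale of \({\textbf{B}}\) is irrelevant, since rescaling \({\textbf{B}}\rightarrow c{\textbf{B}}\) simply rescales \({\textbf{w}}\rightarrow {\textbf{w}}/c\).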

Before presenting examples, let us make a remark on the region where the B-transformation is undefined, i.e., where \({\textbf{x}}^T{\textbf{B}}{\textbf{x}}= 0\). This region can be any quadratic hypersurface depending on the specific structure of the \({\textbf{B}}\) matrix. The region also contains the nullspace of \({\textbf{B}}\), i.e., the points where \({\textbf{B}}{\textbf{x}}=0\). In Sect. 4, the hypersurface consists of two straight lines, while in Sect. 5, it consists of two cone-shaped surfaces. This raises the question of how the system behaves if the initial condition \({\textbf{x}}_0\) is an element of the region, i.e., \({\textbf{x}}_0^T{\textbf{B}}{\textbf{x}}_0=0\).

First, one can prove that if the initial condition of the QDE is on the hypersurface, the dynamics is constrained to the hypersurface for the whole time evolution. To justify this, we define \(b(t) = {\textbf{x}}(t)^T{\textbf{B}}{\textbf{x}}(t)\), whose dynamics is derived from the quadratic equations (3) as \({\dot{b}} = b\left( \lambda - 2{\textbf{w}}^T{\textbf{B}}{\textbf{x}}\right) \). If the initial condition is on the hypersurface, i.e., \(b(0)=0\), then the differential equation is solved by \(b(t)=0\) for all times.
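The stated form of \({\dot{b}}\) can be verified explicitly:

$$\begin{aligned} {\dot{b}} = 2{\textbf{x}}^T{\textbf{B}}\dot{{\textbf{x}}} = {\textbf{x}}^T\left( {\textbf{V}}^T{\textbf{B}}+ {\textbf{B}}{\textbf{V}}\right) {\textbf{x}}+ 2\sum _i \left( {\textbf{B}}{\textbf{x}}\right) _i {\textbf{x}}^T{\textbf{A}}_i{\textbf{x}}= \lambda b - 2b\, {\textbf{w}}^T{\textbf{B}}{\textbf{x}}, \end{aligned}$$

where the linear part was simplified with Eq. (12), and \({\textbf{x}}^T{\textbf{A}}_i{\textbf{x}}= w_i b - 2x_i{\textbf{w}}^T{\textbf{B}}{\textbf{x}}\) follows from Eq. (13).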

Second, it can also be shown that on the hypersurface, the quadratic differential equation is analytically solvable and the solution is obtained simply by setting \({\textbf{x}}_0^T{\textbf{B}}{\textbf{x}}_0=0\) in Eq. (17), which reads

$$\begin{aligned} {\textbf{x}}(t)=\frac{e^{{\textbf{M}}t}{\textbf{x}}_0}{e^{-\lambda t} + 2{\textbf{y}}_p(t)^T{\textbf{B}}e^{{\textbf{M}}t}{\textbf{x}}_0}. \end{aligned}$$
(18)

Note that (18) does not obviously solve the quadratic differential equation because it was derived from Eq. (17), which was computed through the B-transformation. However, by substituting (18) directly into (3) and taking advantage of \({\textbf{x}}_0^T{\textbf{B}}{\textbf{x}}_0 = 0\) and the properties of \({\textbf{B}}\), one can prove that the QDE is indeed solved by (18). Hence, if a QDE is exactly solvable by a B-transformation, the integrability is preserved also in the region where the generalized inversion is undefined.

4 Example: Two-Dimensional Quadratic System

In this section, we consider the quadratic equations of

$$\begin{aligned} {\dot{x}}_1&= -x_1^2 + 4x_1x_2 -x_1 +2x_2 \nonumber \\ {\dot{x}}_2&= x_1^2 +2x_2^2 + x_1 \end{aligned}$$
(19)

from which we read out

$$\begin{aligned} {\textbf{A}}_1 =\left[ \begin{array}{cc} -1 &{}\quad 2 \\ 2 &{}\quad 0 \end{array}\right] \qquad {\textbf{A}}_2 =\left[ \begin{array}{cc} 1 &{}\quad 0 \\ 0 &{}\quad 2 \end{array}\right] \qquad {\textbf{V}}=\left[ \begin{array}{cc} -1 &{}\quad 2 \\ 1 &{}\quad 0 \end{array}\right] . \end{aligned}$$
(20)

The first step is to obtain the eigenmatrices of \({\textbf{V}}\). The right-hand eigenvectors of \({\textbf{V}}^T\) are given by

$$\begin{aligned} {\textbf{r}}_1 = \left[ \begin{array}{c} 2 \\ -2 \end{array}\right] \qquad {\textbf{r}}_2 = \left[ \begin{array}{c} 1 \\ 2 \end{array}\right] \end{aligned}$$
(21)

with the eigenvalues of \(s_1 = -2\) and \(s_2 = 1\). The potential \({\textbf{B}}\) matrices are given by

$$\begin{aligned} {\textbf{P}}_{11} = \left[ \begin{array}{cc} 4 &{}\quad -4 \\ -4 &{}\quad 4 \end{array}\right] \qquad {\textbf{P}}_{22} = \left[ \begin{array}{cc} 1 &{}\quad 2 \\ 2 &{}\quad 4 \end{array}\right] \qquad {\textbf{P}}_{12} = \left[ \begin{array}{cc} 2 &{}\quad 1 \\ 1 &{}\quad -4 \end{array}\right] \end{aligned}$$

with matrix-eigenvalues \(\lambda _{11} = -4\), \(\lambda _{22} = 2\) and \(\lambda _{12} = -1\), which are non-degenerate eigenmatrices.

The second step is to check the proportionality condition of Eq. (14) and determine the coefficients \(w_i\).

Proportionality check, Eq. (14), with \({\tilde{{\textbf{A}}}}_1^1 = [0]\) and \({\tilde{{\textbf{A}}}}_2^2 = [1]\):

\({\textbf{B}}= {\textbf{P}}_{11}\): \({\tilde{{\textbf{B}}}}^1=[4] \rightarrow w_1 = 0\); \({\tilde{{\textbf{B}}}}^2=[4] \rightarrow w_2 = \frac{1}{4}\)

\({\textbf{B}}= {\textbf{P}}_{22}\): \({\tilde{{\textbf{B}}}}^1=[4] \rightarrow w_1 = 0\); \({\tilde{{\textbf{B}}}}^2=[1] \rightarrow w_2 = 1\)

\({\textbf{B}}= {\textbf{P}}_{12}\): \({\tilde{{\textbf{B}}}}^1=[-4] \rightarrow w_1 = 0\); \({\tilde{{\textbf{B}}}}^2=[2] \rightarrow w_2 = \frac{1}{2}\)

The check shows that all three eigenmatrices obey the proportionality condition (14). Note that for a two-dimensional QDE, the proportionality check always simplifies to a comparison of \(1\times 1\) matrices, which is in most cases trivially fulfilled. For higher dimensions, however, Eq. (14) may impose a much stricter condition. For an example, see Sect. 5.

In the example, we continue with the third step by checking Eq. (15) for each i. Evaluating the right-hand side (rhs) of Eq. (15) with each pair of \({\textbf{B}}\) and \({\textbf{w}}\) found in Step 2, and comparing with \({\textbf{a}}_1^1 = [-1, 2]^T\) and \({\textbf{a}}_2^2 = [0, 2]^T\), we find:

\({\textbf{B}}= {\textbf{P}}_{11}\), \({\textbf{w}}= [0, \frac{1}{4}]^T\): the rhs for \(i=1\) is \([2, -1]^T \ne {\textbf{a}}_1^1\), so \({\textbf{P}}_{11}\) fails.

\({\textbf{B}}= {\textbf{P}}_{22}\), \({\textbf{w}}= [0, 1]^T\): the rhs for \(i=1\) is \([-4, -4]^T \ne {\textbf{a}}_1^1\), so \({\textbf{P}}_{22}\) fails.

\({\textbf{B}}= {\textbf{P}}_{12}\), \({\textbf{w}}= [0, \frac{1}{2}]^T\): the rhs equals \([-1, 2]^T = {\textbf{a}}_1^1\) for \(i=1\) and \([0, 2]^T = {\textbf{a}}_2^2\) for \(i=2\).

The investigation shows that \({\textbf{a}}_1^1\) and \({\textbf{a}}_2^2\) are reproduced only by \({\textbf{P}}_{12}\).

The result of the algorithm is that the QDE can be transformed to an LDE by \({\textbf{B}}= {\textbf{P}}_{12}\) with \(\lambda = -1\) and \({\textbf{w}}= [0, \frac{1}{2}]^T\). The linear system is obtained as

$$\begin{aligned} {\dot{y}}_1= & {} 2y_2 \nonumber \\ {\dot{y}}_2= & {} y_1 + y_2 + \frac{1}{2} \end{aligned}$$
(22)

which can be solved analytically for arbitrary initial conditions.

Fig. 1: Phase portrait of the quadratic system, Eq. (19). The green dots indicate the fixed points of the system. The dashed lines comprise the region where the B-transformation is undefined (Color figure online)

The analytical solution of the quadratic system can be given based on Eq. (17). The phase portrait is shown in Fig. 1. The two dashed lines constitute the region where the B-transformation is undefined, i.e., where \({\textbf{x}}^T {\textbf{B}}{\textbf{x}}= ({\textbf{x}}^T{\textbf{r}}_1)\cdot ({\textbf{x}}^T{\textbf{r}}_2)=0\).
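Using \({\textbf{B}}={\textbf{P}}_{12}\), \({\textbf{w}}=[0, \frac{1}{2}]^T\), \(\lambda =-1\) and \({\textbf{M}}={\textbf{V}}+{\textbf{I}}\) found above, the closed-form solution (17) can be cross-checked against direct numerical integration of Eq. (19). The sketch below is our own; the initial condition and tolerances are arbitrary.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

B = np.array([[2.0, 1.0], [1.0, -4.0]])      # P_12 found above
w = np.array([0.0, 0.5])
lam = -1.0
V = np.array([[-1.0, 2.0], [1.0, 0.0]])
M = V - lam * np.eye(2)

def qde(t, x):                               # right-hand side of Eq. (19)
    x1, x2 = x
    return [-x1**2 + 4*x1*x2 - x1 + 2*x2, x1**2 + 2*x2**2 + x1]

def analytic(t, x0):                         # closed form, Eq. (17)
    E = expm(M * t)
    yp = (E - np.eye(2)) @ np.linalg.solve(M, w)
    c = x0 @ B @ x0
    den = np.exp(-lam * t) + 2 * (yp @ B @ (E @ x0)) + c * (yp @ B @ yp)
    return (E @ x0 + c * yp) / den

x0 = np.array([0.3, -0.2])                   # off the x^T B x = 0 hypersurface
sol = solve_ivp(qde, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)
assert np.allclose(sol.y[:, -1], analytic(1.0, x0), atol=1e-6)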

5 Three-Dimensional Example

In this section, the algorithm is demonstrated in an example with three variables \(x_1(t)\), \(x_2(t)\) and \(x_3(t)\). The quadratic differential equations are given by

$$\begin{aligned} {\dot{x}}_1&= x_1^2 - 4x_1x_2 + 7x_2^2 + 5x_1 \nonumber \\ {\dot{x}}_2&= x_1x_2 + 2x_1x_3 - 2x_2^2 - 7x_2x_3 + 2x_2 \nonumber \\ {\dot{x}}_3&= -x_2^2 - 4x_2x_3 - 7x_3^2 - x_3 \end{aligned}$$
(23)

from which we can read out

$$\begin{aligned} {\textbf{A}}_1 =\left[ \begin{array}{ccc} 1 &{}\quad -2 &{}\quad 0 \\ -2 &{}\quad 7 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{array}\right] \qquad {\textbf{A}}_2 =\left[ \begin{array}{ccc} 0 &{}\quad \frac{1}{2} &{}\quad 1 \\ \frac{1}{2} &{}\quad -2 &{}\quad -\frac{7}{2} \\ 1 &{}\quad -\frac{7}{2} &{}\quad 0 \end{array}\right] \qquad {\textbf{A}}_3 =\left[ \begin{array}{ccc} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad -1 &{}\quad -2 \\ 0 &{}\quad -2 &{}\quad -7 \end{array}\right] \nonumber \\ \end{aligned}$$
(24)

and

$$\begin{aligned} {\textbf{V}}=\left[ \begin{array}{ccc} 5 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 2 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad -1 \end{array}\right] . \end{aligned}$$
(25)

The eigenvectors of \({\textbf{V}}^T\) are simply obtained as \({\textbf{r}}_i = {\textbf{e}}_i\) with the eigenvalues of \(s_1 = 5\), \(s_2=2\) and \(s_3=-1\). Hence, the eigenmatrices of \({\textbf{V}}\) are as follows:

$$\begin{aligned} {\textbf{P}}_{11}&= \left[ \begin{array}{ccc} 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{array}\right] \qquad&\lambda _{11} = 10 \\ {\textbf{P}}_{22}&= \left[ \begin{array}{ccc} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{array}\right] \qquad&\lambda _{22} = 4 \\ {\textbf{P}}_{33}&= \left[ \begin{array}{ccc} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 \end{array}\right] \qquad&\lambda _{33} = -2 \\ {\textbf{P}}_{12}&= \left[ \begin{array}{ccc} 0 &{}\quad \frac{1}{2} &{}\quad 0 \\ \frac{1}{2} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{array}\right] \qquad&\lambda _{12} = 7 \\ {\textbf{P}}_{13}&= \left[ \begin{array}{ccc} 0 &{}\quad 0 &{}\quad \frac{1}{2} \\ 0 &{}\quad 0 &{}\quad 0 \\ \frac{1}{2} &{}\quad 0 &{}\quad 0 \end{array}\right] \qquad&\lambda _{13} = 4 \\ {\textbf{P}}_{23}&= \left[ \begin{array}{ccc} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \frac{1}{2} \\ 0 &{}\quad \frac{1}{2} &{}\quad 0 \end{array}\right] \qquad&\lambda _{23} = 1 \end{aligned}$$

Note that the eigenmatrices \({\textbf{P}}_{22}\) and \({\textbf{P}}_{13}\) are degenerate. Therefore, their linear combination

$$\begin{aligned} {\textbf{P}}_4 = {\textbf{P}}_{22} + b{\textbf{P}}_{13} = \left[ \begin{array}{ccc} 0 &{}\quad 0 &{}\quad \frac{b}{2} \\ 0 &{}\quad 1 &{}\quad 0 \\ \frac{b}{2} &{}\quad 0 &{}\quad 0 \end{array}\right] \end{aligned}$$
(26)

is also a potential \({\textbf{B}}\) matrix with an arbitrary value of b.
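This bookkeeping is easy to reproduce numerically; a small sketch of Step 1 for this example (our own illustration):

import numpy as np

V = np.diag([5.0, 2.0, -1.0])
s, R = np.linalg.eig(V.T)      # for this diagonal V: s = (5, 2, -1), R = I

lams = []
for j in range(3):
    for m in range(j, 3):
        lams.append(s[j] + s[m])   # matrix-eigenvalue of P_jm

print(sorted(lams))   # [-2.0, 1.0, 4.0, 4.0, 7.0, 10.0]: lambda = 4 is degenerate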

We continue the algorithm with Step 2, the proportionality check. Here \({\tilde{{\textbf{A}}}}_1^1 = \left[ \begin{array}{cc} 7 &{} 0 \\ 0 &{} 0 \end{array}\right] \), \({\tilde{{\textbf{A}}}}_2^2 = \left[ \begin{array}{cc} 0 &{} 1 \\ 1 &{} 0 \end{array}\right] \) and \({\tilde{{\textbf{A}}}}_3^3 = \left[ \begin{array}{cc} 0 &{} 0 \\ 0 &{} -1 \end{array}\right] \). For \({\textbf{P}}_{11}\), \({\textbf{P}}_{33}\), \({\textbf{P}}_{12}\), \({\textbf{P}}_{13}\) and \({\textbf{P}}_{23}\), the submatrix \({\tilde{{\textbf{B}}}}^1\) is not proportional to \({\tilde{{\textbf{A}}}}_1^1\), and for \({\textbf{P}}_{22}\), \({\tilde{{\textbf{B}}}}^2\) vanishes while \({\tilde{{\textbf{A}}}}_2^2\) does not.

Note that only \({\textbf{P}}_4\) obeys the proportionality check, with \(w_1 = 7\), \(w_2 = \frac{2}{b}\) and \(w_3 = -1\); apart from \(b\ne 0\), no restriction on b has been obtained. Hence, in Step 3, we only investigate \({\textbf{P}}_4\).

Checking Eq. (15) for \({\textbf{B}}={\textbf{P}}_4\) with \({\textbf{w}}= [7, \frac{2}{b}, -1]^T\) yields, for \(i=1\), the right-hand side \([b, -\frac{2}{b}, 0]^T\), which reproduces \({\textbf{a}}_1^1 = [1, -2, 0]^T\) only for \(b=1\); with \(b=1\), the checks for \(i=2\) and \(i=3\) are also fulfilled. The quadratic system is thus transformed to a linear system with the matrix

$$\begin{aligned} {\textbf{B}}= \left[ \begin{array}{ccc} 0 &{} 0 &{} \frac{1}{2} \\ 0 &{} 1 &{} 0 \\ \frac{1}{2} &{} 0 &{} 0 \end{array}\right] \end{aligned}$$
(27)

and the resulting linear differential equations are given as

$$\begin{aligned} {\dot{y}}_1= & {} y_1 + 7 \nonumber \\ {\dot{y}}_2= & {} -2y_2 + 2 \nonumber \\ {\dot{y}}_3= & {} -5y_3 - 1 \end{aligned}$$
(28)

which are exactly solvable for arbitrary initial conditions. The analytical solution can be given based on Eq. (17).

Note that the B transformation is undefined in the region where \({\textbf{x}}^T {\textbf{B}}{\textbf{x}}= x_1x_3 + x_2^2=0\). This quadratic equation determines two cone-shaped surfaces touching each other at the origin as shown in Fig. 2.

Fig. 2: The hypersurface determined by \({\textbf{x}}^T{\textbf{B}}{\textbf{x}}=0\) consists of two cone-shaped surfaces

6 Example with Complex Eigenvalues

In this example, we demonstrate that the eigenvalues, eigenvectors and eigenmatrices might have complex entries. We study the quadratic system

$$\begin{aligned} {\dot{x}}_1&= -5x_1^2 + 8x_1x_2 +5x_2^2 - 3x_1 - x_2 \nonumber \\ {\dot{x}}_2&= -4x_1^2 -10x_1x_2 + 4 x_2^2 + x_1 - 3 x_2 \end{aligned}$$
(29)

which can be treated using the same method as introduced in Sect. 4. The peculiarity of the present example lies in the fact that the matrix

$$\begin{aligned} {\textbf{V}}^T = \left[ \begin{array}{cc} -3 &{} 1 \\ -1 &{} -3 \end{array}\right] \end{aligned}$$
(30)

has complex eigenvalues. The right-hand eigenvectors of \({\textbf{V}}^T\) are given by

$$\begin{aligned} {\textbf{r}}_1 = \left[ \begin{array}{c} 1 \\ i \end{array}\right] \qquad {\textbf{r}}_2 = \left[ \begin{array}{c} 1 \\ -i \end{array}\right] \end{aligned}$$
(31)

with the eigenvalues of \(s_1 = -3 + i\) and \(s_2 = -3 - i\). The potential \({\textbf{B}}\) matrices are given by

$$\begin{aligned} {\textbf{P}}_{11} = \left[ \begin{array}{cc} 1 &{}\quad i \\ i &{}\quad -1 \end{array}\right] \qquad {\textbf{P}}_{22} = \left[ \begin{array}{cc} 1 &{}\quad -i \\ -i &{}\quad -1 \end{array}\right] \qquad {\textbf{P}}_{12} = \left[ \begin{array}{cc} 1 &{}\quad 0 \\ 0 &{}\quad 1 \end{array}\right] \end{aligned}$$

with matrix-eigenvalues \(\lambda _{11} = -6+2i\), \(\lambda _{22} = -6-2i\) and \(\lambda _{12} = -6\), which are again non-degenerate eigenmatrices. By performing the same procedure as in Sect. 4, we find that all potential matrices survive the proportionality check with well-defined \(w_i\) values, but only \({\textbf{B}}={\textbf{P}}_{12}\) obeys the \({\textbf{a}}_i^i\) check of Eq. (15), with \(w_1 = 5\) and \(w_2 = -4\).

The linear system is obtained as

$$\begin{aligned} {\dot{y}}_1= & {} 3y_1 - y_2 + 5 \nonumber \\ {\dot{y}}_2= & {} y_1 + 3 y_2 - 4 \end{aligned}$$
(32)

which is exactly solvable for arbitrary initial conditions. Note that due to the complex eigenvalues of \({\textbf{V}}\), the linear system also has complex eigenvalues, describing solution paths that spiral around the fixed point.
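Although the eigenvectors \({\textbf{r}}_1\) and \({\textbf{r}}_2\) are complex, the surviving eigenmatrix is real; a short numerical sketch (our own) confirms this:

import numpy as np

VT = np.array([[-3.0, 1.0], [-1.0, -3.0]])       # V^T of Eq. (30)
s, R = np.linalg.eig(VT)                          # s = -3 + i, -3 - i
r1, r2 = R[:, 0], R[:, 1]                         # complex conjugate pair

P12 = (np.outer(r1, r2) + np.outer(r2, r1)) / 2   # eigenmatrix for s_1 + s_2
assert np.allclose(P12.imag, 0)                   # real despite complex r_j

B, V = P12.real, VT.T
assert np.allclose(V.T @ B + B @ V, (s[0] + s[1]).real * B)   # Eq. (12)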

7 Conclusion

We studied the system of quadratic differential equations of the form of Eq. (3). Although these differential equation systems cannot be solved in general, we have presented a procedure by means of which a subclass of quadratic systems can be solved analytically. The solution is realized through the B-transformation of \({\textbf{y}}= {\textbf{x}}/({\textbf{x}}^T{\textbf{B}}{\textbf{x}})\) which is a multi-dimensional generalization of the inversion \(y= x^{-1}\).

In Sect. 3, we have described the algorithm which allows one to decide whether a quadratic system can be transformed with a B-transformation to a linear differential equation system. If so, the algorithm also yields the linear system, which is exactly solvable. In the case of quadratic systems which are not solvable through generalized inversion (such differential equations can be found, for example, in systems with chaotic features), the method fails either in Step 2 or in Step 3 of the algorithm. The great advantage of our method is that it can be applied to systems with an arbitrary number of variables. The algorithm can be used as a first step in analytical calculations handling quadratic systems. Furthermore, the presented method and its inverse allow one to generate exactly solvable QDEs, which can then be used to benchmark numerical techniques dealing with non-linear differential equations.