Introduction

Difference equations, or discrete systems of equations, which are used to build models in a variety of domains, are in general non-linear. On the other hand, most of the existing results refer to linear systems, and the various linearization processes employed in practice are not always successful. The reason is that the initial systems possess complexities and, due to this fact, basic characteristics are not inherited by the linearization. It is therefore necessary to rethink linearization processes, their tools and the degree of acceptance of the obtained results. Mathematical Control Theory provides a unifying framework for posing and studying such problems [1, 4]. In this respect, we treat equations or systems known as non-linear discrete systems of polynomial type and deal with non-linearities mainly by using algebraic tools based upon the so-called star-product (cf. section “Preliminaries”). The star-product corresponds to the composition of polynomial functions, in other words to the substitution of one polynomial into another. This star-product allows us to describe the evolution of the system by means of a naturally defined operation, the D-operator (cf. section “2D Polynomial Discrete Systems”). This operation is compatible with the cascade connection of one system with another. In a series of papers, problems of evolution and stability of those systems have been studied [3, 5, 8]. In the present note, inspired by similar problems in Control Theory [1, 7], we pose the problem of equivalence of two such systems, in the framework of D-operators, and we look for conditions under which one system can be transformed into a (sometimes given) equivalent one, with the same future evolution. We deal with the equation F ∗ T = T ∗ G of D-operators, and we look for solutions T when F and G are given (cf. section “T-Similarity”). For a specific system F, and when the given system G is a linear one, the problem of Model Complexity arises.
It turns out that in this case a notion of complexity can be introduced, which captures the intrinsic non-linearity of the system. The solution T may be a polynomial operator, a series of operators or a series of series; it may be invertible or not, and it may converge or not. Each of these situations determines a type of non-linearity complexity for the underlying model (cf. section “Levels of Model Complexity”). Here are the contents of this work. In the beginning we give the preliminary notion of a D-operator and develop the algebraic tools which allow the transformation of the given equation into an algebraic object. After that, we deal with the main object of study, the 2D Nonlinear Discrete-Polynomial Systems. Initially we define an equivalence relation among D-operators, which turns out to be the appropriate one to characterize the evolution of the underlying systems (Theorem 2). This relation is used to define the notion of T-similarity (Definition 2) between two pairs of sequences and to reduce it algebraically to the corresponding D-operators (Theorem 2). The determination of the operator T in the equation F ∗ T = T ∗ G requires considerable machinery to solve the resulting linear-like systems. This is achieved in an algorithmic manner, and at each stage of this process a set of initial conditions has to be chosen. Theorem 3 ensures that, under mild restrictions, the linear T-similarity problem for a given nonlinear discrete polynomial system admits a series solution. Along the same lines, a table of the levels of Model Complexity is established. All the above situations are illustrated through indicative numerical examples, which conclude this presentation. Exact proofs as well as applications to specific problems will be given in a forthcoming work [7].

Preliminaries

In this section we develop the algebraic tools that will be used later to describe nonlinear polynomial discrete systems of dimension two. The cornerstone of our approach is the so-called D-operator. It was introduced in [5], and it transforms a pair of sequences into a pair of sequences. In order to present these ideas in a comprehensible way, we shall follow a constructive method, starting from simpler operators and proceeding gradually.

Consider a sequence x(t), t ∈ Z, with x(t) = 0 for t < 0. Let us further consider the \(\boldsymbol{\delta }_{\mathbf{i}}\) operator, where \(\mathbf{i} = (i_{1},i_{2},\ldots,i_{n})\) is a given vector of integers, called a multi-index. This operator defines a new sequence as follows:

$$\displaystyle{\boldsymbol{\delta }_{\mathbf{i}}x(t) = x(t - i_{1})x(t - i_{2})\cdots x(t - i_{n})}$$

If the multi-index consists of a single integer i, then \(\boldsymbol{\delta }_{i}x(t) = x(t - i)\), which means that \(\boldsymbol{\delta }_{i}\) coincides with the well-known shift operator. A special case is the operator \(\boldsymbol{\delta }_{0}\), which leaves a sequence unchanged, i.e. \(\boldsymbol{\delta }_{0}x(t) = x(t)\). It is called the identity operator. For the sake of completeness we define by convention that \(\boldsymbol{\delta }_{e}x(t) = 1\). Using this action of the \(\boldsymbol{\delta }\)-operators upon sequences, we can define an external operation among \(\boldsymbol{\delta }\)-operators, named addition, as follows: \((\boldsymbol{\delta }_{\mathbf{i}} +\boldsymbol{\delta } _{\mathbf{j}})x(t) =\boldsymbol{\delta } _{\mathbf{i}}x(t) +\boldsymbol{\delta } _{\mathbf{j}}x(t)\). An internal operation, named the star-product, is defined as the composition of two \(\boldsymbol{\delta }\)-operators. Indeed, if \(w(t) =\boldsymbol{\delta } _{\mathbf{i}}x(t)\), then \(\boldsymbol{\delta }_{\mathbf{j}} {\ast}\boldsymbol{\delta }_{\mathbf{i}}x(t) =\boldsymbol{\delta } _{\mathbf{j}}w(t) =\boldsymbol{\delta } _{\mathbf{j}}(\boldsymbol{\delta }_{\mathbf{i}}x(t))\). It can be proved [6] that in general \(\boldsymbol{\delta }_{\mathbf{k}} {\ast} (\boldsymbol{\delta }_{\mathbf{i}} +\boldsymbol{\delta } _{\mathbf{j}})\neq \boldsymbol{\delta }_{\mathbf{k}} {\ast}\boldsymbol{\delta }_{\mathbf{i}} +\boldsymbol{\delta } _{\mathbf{k}} {\ast}\boldsymbol{\delta }_{\mathbf{j}}\). The latter relation indicates that the set \((\Delta,+,{\ast})\) of the \(\boldsymbol{\delta }\)-operators, equipped with the operations of addition and the star-product, is not a ring. Expressions of the form \(A =\sum _{ n=0}^{w}\sum _{\mathbf{i\in I_{n}}}a_{\mathbf{i}}\boldsymbol{\delta }_{\mathbf{i}}\) are called \(\boldsymbol{\delta }\)-polynomials, where by \(\mathbf{I}_{n}\) we denote the set of multi-indexes with n elements. By convention \(\mathbf{I_{0}} =\{\boldsymbol{\delta } _{e}\}\).
The \(\boldsymbol{\delta }\)-polynomials also work as functions transforming sequences to sequences as follows: Let A be a \(\boldsymbol{\delta }\)-polynomial and x(t) a sequence, then

$$\displaystyle{Ax(t) =\sum _{ n=0}^{w}\sum _{ \mathbf{i}=(i_{1},\ldots,i_{n})\in \mathbf{I}_{n}}a_{\mathbf{i}}x(t - i_{1})x(t - i_{2})\cdots x(t - i_{n})}$$
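These definitions are straightforward to realize in code. The sketch below is our own illustration (the dictionary encoding of multi-indexes is an assumption, not part of the text): a δ-polynomial becomes a map from sequences to sequences, and the star-product is ordinary function composition.

```python
from math import prod

# A δ-polynomial encoded as {multi-index tuple: coefficient}; the empty
# tuple () plays the role of δ_e, whose empty product is 1 by convention.
def apply_delta_poly(A, x):
    """Return the sequence t -> A x(t) = sum of a_i x(t-i1)...x(t-in)."""
    return lambda t: sum(a * prod(x(t - i) for i in idx) for idx, a in A.items())

x = lambda t: t if t >= 0 else 0        # x(t) = t, vanishing for t < 0

# A = 2δ_0 + δ_(1,2), so A x(t) = 2 x(t) + x(t-1) x(t-2)
Ax = apply_delta_poly({(0,): 2, (1, 2): 1}, x)
print(Ax(5))                            # 2*5 + 4*3 = 22

# star-product = composition: (δ_1 * A) x(t) = δ_1 (A x)(t) = A x(t-1)
BAx = apply_delta_poly({(1,): 1}, Ax)
print(BAx(5))                           # equals Ax(4) = 2*4 + 3*2 = 14
```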

The star-product between \(\boldsymbol{\delta }\)-polynomials corresponds, as before, to composition, in other words to the substitution of one polynomial into another. Indeed, if A, B are two \(\boldsymbol{\delta }\)-polynomials, then \(A {\ast} By(t) = A \circ By(t) = A(B(y(t)))\). The addition of \(\boldsymbol{\delta }\)-polynomials is defined as \((A + B)x(t) = Ax(t) + Bx(t)\), and in general \(C {\ast} [A + B]\neq C {\ast} A + C {\ast} B\). All the above apply directly to \(\boldsymbol{\delta }\)-series as well, which are nothing but \(\boldsymbol{\delta }\)-polynomials with an infinite number of terms. We can also extend the whole methodology so that it acts not on a single sequence but on a pair of sequences. We achieve this by means of the \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-operator. Indeed, let \(\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}}\) be a \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-operator, where \(\mathbf{i} = (i_{1},i_{2},\ldots,i_{n})\), \(\mathbf{j} = (j_{1},j_{2},\ldots,j_{m})\) are two multi-indexes. This operator works as follows:

$$\displaystyle{\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}}[x(t),y(t)] = x(t - i_{1})\cdots x(t - i_{n})y(t - j_{1})\cdots y(t - j_{m})}$$

Therefore, the \(\boldsymbol{\delta }\)-part of the \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-operator acts exclusively on the first sequence and the \(\boldsymbol{\epsilon }\)-part on the second. If j = { e} or i = { e}, then \(\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{e}[x(t),y(t)] =\boldsymbol{\delta } _{\mathbf{i}}x(t)\) and \(\boldsymbol{\delta }_{e}\boldsymbol{\epsilon }_{\mathbf{j}}[x(t),y(t)] =\boldsymbol{\epsilon } _{\mathbf{j}}y(t)\), respectively. We can define the addition as follows:

$$\displaystyle{(\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}} +\boldsymbol{\delta } _{\mathbf{{i}^{{\prime}}}}\boldsymbol{\epsilon }_{\mathbf{{j}^{{\prime}}}})[x(t),y(t)] =\boldsymbol{\delta } _{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}}[x(t),y(t)] +\boldsymbol{\delta } _{\mathbf{{i}^{{\prime}}}}\boldsymbol{\epsilon }_{\mathbf{{j}^{{\prime}}}}[x(t),y(t)]}$$

Let \(A =\sum _{ n=0}^{\nu }\sum _{m=0}^{\mu }\sum _{(\mathbf{i,j})\in \mathbf{I}_{n}\times \mathbf{J}_{m}}c_{\mathbf{ij}}\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}}\) be a \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-polynomial. This polynomial acts on a pair of sequences as follows:

$$\displaystyle{A[x(t),y(t)] =\sum _{ n=0}^{\nu }\sum _{ m=0}^{\mu }\sum _{ (\mathbf{i,j})\in \mathbf{I}_{n}\times \mathbf{J}_{m}}c_{\mathbf{ij}}x(t - i_{1})\cdots x(t - i_{n})y(t - j_{1})\cdots y(t - j_{m})}$$

If A is a \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-series, then A[x(t), y(t)] is a Volterra series, containing products among delays of x(t) and y(t). In the case of linear polynomials (or linear series), A[x(t), y(t)] is a linear polynomial (or a linear series) of delays of x(t) and y(t). The star-product among \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-operators (or \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-polynomials or \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-series) corresponds to the composition of maps. Indeed, let A, B, C be \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-polynomials; if we substitute the polynomial B into the \(\boldsymbol{\delta }\)-part of A and C into the \(\boldsymbol{\epsilon }\)-part of A, we get a \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-polynomial which corresponds to the composition A ∘ [B, C]; it is called the star-product of the polynomials A, B, C and is denoted by A ∗ [B, C].
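The action of a δε-polynomial on a pair of sequences admits the same kind of sketch (again our own encoding, keyed by pairs of multi-index tuples; not part of the text):

```python
from math import prod

# A δε-polynomial encoded as {(i_tuple, j_tuple): coefficient}; the δ-part
# acts on the first sequence and the ε-part on the second.
def apply_de_poly(P, x, y):
    """Return t -> P[x(t), y(t)] = sum of c_ij x(t-i1)...x(t-in) y(t-j1)...y(t-jm)."""
    return lambda t: sum(
        c * prod(x(t - i) for i in ii) * prod(y(t - j) for j in jj)
        for (ii, jj), c in P.items()
    )

x = lambda t: t if t >= 0 else 0        # x(t) = t
y = lambda t: 2 * t if t >= 0 else 0    # y(t) = 2t

# P = δ_(1) ε_(0,2): P[x, y](t) = x(t-1) y(t) y(t-2)
Pxy = apply_de_poly({((1,), (0, 2)): 1}, x, y)
print(Pxy(5))                           # 4 * 10 * 6 = 240
```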

We present now the D-operators. They are nothing but a pair of \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-polynomials, in other words:

$$\displaystyle{D = \left [\begin{array}{c} A\\ B \end{array} \right ] = \left [\begin{array}{c} \sum _{\mathbf{(i,j)\in I_{a}\times J_{a}}}a_{\mathbf{ij}}\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}} \\ \sum _{\mathbf{(i,j)\in I_{b}\times J_{b}}}b_{\mathbf{ij}}\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}} \end{array} \right ]}$$

If the above \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-polynomials are linear, then we speak of a linear D-operator. If instead of the \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-polynomials A and B we have \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-series A and B, then the D-operator is called a D-series.

Definition 1.

Let G and F be two D-operators:

$$\displaystyle{G = \left [\begin{array}{c} \sum _{\mathbf{(i,j)\in I_{g,1}}\times J_{g,1}}g_{\mathbf{ij}}^{(1)}\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}} \\ \sum _{\mathbf{(i,j)\in I_{g,2}}\times J_{g,2}}g_{\mathbf{ij}}^{(2)}\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}} \end{array} \right ],\quad F = \left [\begin{array}{c} \sum _{\mathbf{(i,j)\in I_{f,1}\times J_{f,1}}}f_{\mathbf{ij}}^{(1)}\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}} \\ \sum _{\mathbf{(i,j)\in I_{f,2}\times J_{f,2}}}f_{\mathbf{ij}}^{(2)}\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}} \end{array} \right ]}$$

We say that G = F if and only if \(\mathbf{I}_{g,k} = \mathbf{I}_{f,k}\), \(\mathbf{J}_{g,k} = \mathbf{J}_{f,k}\) and \(g_{\mathbf{ij}}^{(k)} = f_{\mathbf{ij}}^{(k)}\) for k = 1, 2. In other words, they have the same sets of multi-indexes and the same coefficients.

The next operations generalize the foregoing definitions.

Definition 2.

Let us have two D-operators:

$$\displaystyle{D_{1} = \left [\begin{array}{c} A_{1} \\ B_{1} \end{array} \right ],\quad D_{2} = \left [\begin{array}{c} A_{2} \\ B_{2} \end{array} \right ]}$$

their dot-product and star-product are defined as:

$$\displaystyle{D_{1}\cdot D_{2} = \left [\begin{array}{c} A_{1} \cdot A_{2} \\ B_{1} \cdot B_{2} \end{array} \right ],\quad D_{1}{\ast}D_{2} = \left [\begin{array}{c} A_{1} {\ast} [A_{2},B_{2}] \\ B_{1} {\ast} [A_{2},B_{2}] \end{array} \right ]}$$

We can extend all the above to the case of \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-series in a similar way.

2D Polynomial Discrete Systems

In this section we show how the D-operators can be used to describe non-linear polynomial discrete systems. Let us start with polynomial discrete systems involving only one sequence. They have the form:

$$\displaystyle{ x(t) =\sum _{ k=1}^{\theta }\sum _{ \begin{array}{c}\mathbf{i}\in \mathbf{I}_{k} \\ \mathbf{i}=(i_{1},i_{2},\ldots,i_{k})\end{array}}c_{\mathbf{i}}x(t - 1 - i_{1})x(t - 1 - i_{2})\cdots x(t - 1 - i_{k}) }$$
(1)

with \(c_{\mathbf{i}} \in \mathbf{R}\) and \(\mathbf{I}_{k}\) a finite set of multi-indexes of dimension k. We say that we assign to this system a set of initial conditions \(I =\{\gamma _{0},\gamma _{1},\ldots,\gamma _{s-1}\}\) if and only if \(x(0) =\gamma _{0}\), \(x(1) =\gamma _{1}\), …, \(x(s - 1) =\gamma _{s-1}\), where s is the maximum delay appearing in (1). Starting from these initial conditions and using (1), we can calculate the entire future evolution of the system, that is the quantities \(x(s),x(s + 1),x(s + 2),\ldots\) Now, by using the \(\boldsymbol{\delta }\)-polynomial \(A =\sum _{ k=1}^{\theta }\sum _{ \begin{array}{c}\mathbf{i}\in \mathbf{I}_{k} \\ \mathbf{i}=(i_{1},i_{2},\ldots,i_{k})\end{array}}c_{\mathbf{i}}\boldsymbol{\delta }_{\mathbf{i}}\), we can rewrite the above system shortly as \(x(t) = Ax(t - 1)\). By means of this notation the evolution of the system is described through the star-product. Indeed, it can be proved [3] that:

Theorem 1.

The evolution of the system (1) can be calculated by the formula: \(x(t) =\mathop{\underbrace{ A {\ast} A {\ast}\cdots {\ast} A}}\limits _{n-times}x(t - n) = {A}^{n}x(t - n)\) , \(t = s,s + 1,s + 2,\ldots\) , under the assumption that the same set of initial conditions I has been used.
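Theorem 1 can be checked numerically on a toy system (our own example; exact rational arithmetic avoids rounding issues): simulating the recursion directly and applying the operator A to itself n times, with composition realizing the star-product, yield the same values.

```python
from fractions import Fraction as F
from math import prod

def apply_poly(A, x):
    """t -> (A x)(t) for a δ-polynomial A = {multi-index tuple: coefficient}."""
    return lambda t: sum(c * prod(x(t - i) for i in idx) for idx, c in A.items())

# the system x(t) = A x(t-1) with A = δ_0 + (1/2) δ_(0,1),
# i.e. x(t) = x(t-1) + (1/2) x(t-1) x(t-2); initial conditions x(0)=1, x(1)=2
A = {(0,): F(1), (0, 1): F(1, 2)}
vals = {0: F(1), 1: F(2)}
for t in range(2, 10):
    vals[t] = vals[t-1] + F(1, 2) * vals[t-1] * vals[t-2]

x = lambda t: vals.get(t, F(0))

# Theorem 1: x(t) = A^n x(t-n), where A^n is the n-fold star-product,
# realized here as n-fold composition
f = x
for n in range(1, 4):
    f = apply_poly(A, f)            # f now acts as A^n on the trajectory
    for t in range(2 * n, 10):      # keep clear of the initial segment
        assert f(t - n) == vals[t]
print("evolution via the star-product verified")
```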

Let us come now to 2D Polynomial Discrete Systems, that is systems transforming a pair of sequences to a pair of sequences in a nonlinear polynomial way. Let us have the sequences \(x_{1}(t)\), \(x_{2}(t)\) and the system:

$$\displaystyle{x_{1}(t) =\sum _{ \alpha =1}^{{\alpha }^{{\prime}} }\sum _{\beta =1}^{{\beta }^{{\prime}} }\sum _{ \begin{array}{c}(\mathbf{i},\mathbf{j})\in \mathbf{I}_{\alpha }\times \mathbf{J}_{\beta } \\ \mathbf{i}=(i_{1},\ldots,i_{\tau }) \\ \mathbf{j}=(j_{1},\ldots,j_{\xi }) \end{array}}c_{\mathbf{ij}}^{(1)}x_{ 1}(t - i_{1})\cdots x_{1}(t - i_{\tau })x_{2}(t - j_{1})\cdots x_{2}(t - j_{\xi })}$$
$$\displaystyle{ x_{2}(t) =\sum _{ \alpha =1}^{{\alpha }^{{\prime\prime}} }\sum _{\beta =1}^{{\beta }^{{\prime\prime}} }\sum _{ \begin{array}{c}({\mathbf{i}}^{{\prime}},{\mathbf{j}}^{{\prime}})\in \mathbf{{I}^{{\prime}}}_{ \alpha }\times \mathbf{{J}^{{\prime}}}_{ \beta } \\ \mathbf{{i}^{{\prime}}}=(i_{ 1}^{{\prime}},\ldots,i_{{\tau }^{{\prime}}}^{{\prime}}) \\ \mathbf{{j}^{{\prime}}}=(j_{ 1}^{{\prime}},\ldots,j_{{\xi }^{{\prime}}}^{{\prime}}) \end{array}}c_{\mathbf{{i}^{{\prime}}{j}^{{\prime}}}}^{(2)}x_{ 1}(t - i_{1}^{{\prime}})\cdots x_{ 1}(t - i_{{\tau }^{{\prime}}}^{{\prime}})x_{ 2}(t - j_{1}^{{\prime}})\cdots x_{ 2}(t - j_{{\xi }^{{\prime}}}^{{\prime}}) }$$
(2)

where \(\mathbf{I}_{\alpha },\mathbf{J}_{\beta },\mathbf{I}_{\alpha }^{{\prime}},\mathbf{J}_{\beta }^{{\prime}}\) are sets of multi-indexes with α and β elements respectively. We say that we assign to this system the following sets of initial values:

$$\displaystyle{I_{1} =\{ a_{0},a_{1},\ldots,a_{\rho -1}\}\quad,\quad I_{2} =\{ b_{0},b_{1},\ldots,b_{\sigma -1}\}}$$

if \(x_{1}(0) = a_{0},x_{1}(1) = a_{1},\ldots,x_{1}(\rho -1) = a_{\rho -1}\) and \(x_{2}(0) = b_{0},x_{2}(1) = b_{1},\ldots,x_{2}(\sigma -1) = b_{\sigma -1}\), where ρ and σ are the maximum delays of the sequences \(x_{1}(t)\) and \(x_{2}(t)\) respectively.

By means of the D-operators we can rewrite (2) as follows:

$$\displaystyle{\mathbf{x}(t) = \mathbf{G}\mathbf{x}(t-1),\quad \mathbf{x}(t) = \left [\begin{array}{c} x_{1}(t) \\ x_{2}(t) \end{array} \right ],\quad \mathbf{G} = \left [\begin{array}{c} G_{1} \\ G_{2} \end{array} \right ]}$$

where \(G_{1}\), \(G_{2}\) are proper \(\boldsymbol{\delta }\boldsymbol{\epsilon }\)-polynomials and G is the corresponding D-operator.

The next definition ensures that two systems have the same dynamic behaviour.

Definition 3.

We say that two systems \(\mathbf{x}(t) = \mathbf{G}\mathbf{x}(t - 1)\) and \(\mathbf{z}(t) = \mathbf{F}\mathbf{z}(t - 1)\), with F, G D-operators, are equivalent if x(t) = z(t), t = 1, 2, …, whenever they operate under identical initial conditions.

It is trivial to see that this notion is an equivalence relation. The next theorem connects the equivalence of dynamical systems with the equality of D-operators.

Theorem 2.

[7] Let us have the systems \(\mathbf{x}(t) = \mathbf{G}\mathbf{x}(t - 1)\) and \(\mathbf{y}(t) = \mathbf{F}\mathbf{y}(t - 1)\) . These systems are equivalent if and only if the D-operators G and F are equal.

Finally, we can obtain a result similar to Theorem 1 in the case of 2D Polynomial Discrete Systems. Indeed, the time evolution of the system (2) is given by the formula:

$$\displaystyle{\left [\begin{array}{c} x_{1}(t) \\ x_{2}(t) \end{array} \right ] =\mathop{\underbrace{ \mathbf{G} {\ast} \mathbf{G} {\ast}\cdots {\ast} \mathbf{G}}}\limits _{n-times}\mathbf{x}(t-n) = {\mathbf{G}}^{n}\mathbf{x}(t-n),\quad t = s,s+1,s+2,\ldots }$$
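The same numerical check goes through in the 2D case (the concrete system below is again our own toy example): each star-product step substitutes the current pair of component series into the δ- and ε-parts, and the n-fold product reproduces the simulated trajectory.

```python
from fractions import Fraction as F
from math import prod

def apply_de(P, x, y):
    """t -> P[x(t), y(t)] for a δε-polynomial P = {(i_tuple, j_tuple): coeff}."""
    return lambda t: sum(
        c * prod(x(t - i) for i in ii) * prod(y(t - j) for j in jj)
        for (ii, jj), c in P.items()
    )

# D-operator G = [G1; G2] for the system
#   x1(t) = x2(t-1) + (1/2) x1(t-1) x1(t-2),   x2(t) = x1(t-1) + x2(t-1)
G1 = {((), (0,)): F(1), ((0, 1), ()): F(1, 2)}
G2 = {((0,), ()): F(1), ((), (0,)): F(1)}

# direct simulation from the initial data
v1, v2 = {0: F(1), 1: F(2)}, {0: F(1), 1: F(1)}
for t in range(2, 10):
    v1[t] = v2[t-1] + F(1, 2) * v1[t-1] * v1[t-2]
    v2[t] = v1[t-1] + v2[t-1]

x1 = lambda t: v1.get(t, F(0))
x2 = lambda t: v2.get(t, F(0))

# evolution via the star-product: x(t) = G^n x(t-n); each step substitutes
# the current pair (u, w) into the δ- and ε-parts of G1 and G2
u, w = x1, x2
for n in range(1, 4):
    u, w = apply_de(G1, u, w), apply_de(G2, u, w)   # (u, w) now realizes G^n
    for t in range(2 * n, 10):
        assert u(t - n) == v1[t] and w(t - n) == v2[t]
print("2D evolution verified")
```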

T-Similarity

In this section we establish conditions which guarantee that the output (evolution) of a system is identically equal to the output of another system, under the same initial conditions, through a proper change-of-coordinates procedure obtained by means of the star-product and D-series [7, 8]. This will help us later to classify nonlinear systems with respect to this property.

Let us present now the relevant definitions.

Definition 1.

A D-series T is called invertible if we can find another D-series \(\mathbf{{T}^{{\prime}}}\), such that \(\mathbf{{T}^{{\prime}}{\ast} T} = \left [\begin{array}{c} \boldsymbol{\delta }_{0}\\ \boldsymbol{\epsilon }_{ 0} \end{array} \right ]\).

Definition 2.

Two pairs of sequences \(\mathbf{x}(t) = \left [\begin{array}{c} x_{1}(t) \\ x_{2}(t) \end{array} \right ]\) and \(\mathbf{y}(t) = \left [\begin{array}{c} y_{1}(t) \\ y_{2}(t) \end{array} \right ]\) are called T-similar if there exists a nonsingular (invertible) D-series \(\mathbf{T} = \left [\begin{array}{c} T_{1} \\ T_{2} \end{array} \right ]\), such that y(t) = Tx(t).

The meaning of the above definition is that by means of T we can pass from x(t) to y(t) and vice versa. Let us now see how this notion can be extended so that D-operators are involved.

Definition 3.

Let \(\mathbf{G} = \left [\begin{array}{c} G_{1} \\ G_{2} \end{array} \right ]\), \(\mathbf{F} = \left [\begin{array}{c} F_{1} \\ F_{2} \end{array} \right ]\) be two D-operators. They are called T-similar if we can find a series \(\mathbf{T} = \left [\begin{array}{c} T_{1} \\ T_{2} \end{array} \right ]\), such that \(F_{1} {\ast} [T_{1},T_{2}] = T_{1} {\ast} [G_{1},G_{2}]\) and \(F_{2} {\ast} [T_{1},T_{2}] = T_{2} {\ast} [G_{1},G_{2}]\), or shortly FT = TG.

Theorem 1.

T-similarity is an equivalence relation among D-operators.

If F and G are T-similar we write \(\mathbf{F}\stackrel{\mathbf{T}}{\sim }\mathbf{G}\). Equivalence classes are denoted by [F].

Theorem 2.

[7] Let \(\mathbf{x}(t) = \mathbf{G}\mathbf{x}(t - 1)\) , \(\mathbf{y}(t) = \mathbf{F}\mathbf{y}(t - 1)\) be two 2D Polynomial Discrete Systems. The sequences x(t), y(t) are T-similar if and only if the D-operators G, F are T-similar.

The most interesting situation is when the D-operator F is a linear one. In this case we speak of linear T-similarity. In other words: suppose that we have a given nonlinear D-operator G and a linear one L. We want to find a D-series T such that L ∗ T = T ∗ G. Now two fundamental questions arise. First, what is the structure of T? Is it a simple series (whose convergence can be checked by classical techniques) or a series of series (whose convergence cannot be easily checked)? Second, how can we obtain the T-series? We shall establish two theorems dealing with the first question, that of the construction of the T-series.

  • Before we proceed with the calculations we need some terminology.

    $$\displaystyle{L_{\theta } =\sum _{ a=0}^{1}L_{\theta }^{(a,1-a)}\quad,\quad L_{\theta }^{(a,1-a)} =\sum _{ i=0}^{\nu }l_{\theta,i}^{(a,1-a)}\boldsymbol{\delta }_{ i}^{a}\boldsymbol{\epsilon }_{ i}^{1-a}\quad,\quad \theta = 1,2}$$
    $$\displaystyle{T_{\theta } =\sum _{ a=0}^{\infty }\sum _{ b=0}^{\infty }T_{\theta }^{(a,b)}\quad,\quad T_{\theta }^{(a,b)} =\sum _{\mathbf{ (i,j)\in I\times J}}t_{\theta,(\mathbf{i,j})}^{(a,b)}\boldsymbol{\delta }_{ \mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}}\quad,\quad \theta = 1,2}$$
    $$\displaystyle{G_{\theta } =\sum _{ a=0}^{{a}^{{\prime}} }\sum _{b=0}^{{b}^{{\prime}} }G_{\theta }^{(a,b)}\quad,\quad G_{\theta }^{(a,b)} =\sum _{\mathbf{ (i,j)\in I\times J}}g_{\theta,(\mathbf{i,j})}^{(a,b)}\boldsymbol{\delta }_{ \mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}}\quad,\quad \theta = 1,2}$$
  • By Q 0 we denote the matrix:

    $$\displaystyle{Q_{0} = \left (\begin{array}{cccc} L_{1}^{(1,0)} - G_{1}^{(1,0)} & L_{1}^{(0,1)} & - G_{2}^{(1,0)} & 0 \\ L_{2}^{(1,0)} & L_{2}^{(0,1)} - G_{1}^{(1,0)} & 0 & - G_{2}^{(1,0)} \\ - G_{1}^{(0,1)} & 0 &L_{1}^{(1,0)} - G_{2}^{(0,1)} & L_{1}^{(0,1)} \\ 0 & - G_{1}^{(0,1)} & L_{2}^{(1,0)} & L_{2}^{(0,1)} - G_{2}^{(0,1)}\end{array} \right )}$$
  • By \(\mathcal{A}\) we denote the matrix:

    $$\displaystyle{\mathcal{A} = \left [\begin{array}{cccc} l_{1,0}^{(1,0)} - g_{1,0}^{(1,0)} & l_{1,0}^{(0,1)} & - g_{2,0}^{(1,0)} & 0 \\ l_{2,0}^{(1,0)} & l_{2,0}^{(0,1)} - g_{1,0}^{(1,0)} & 0 & - g_{2,0}^{(1,0)} \\ - g_{1,0}^{(0,1)} & 0 &l_{1,0}^{(1,0)} - g_{2,0}^{(0,1)} & l_{1,0}^{(0,1)} \\ 0 & - g_{1,0}^{(0,1)} & l_{2,0}^{(1,0)} & l_{2,0}^{(0,1)} - g_{2,0}^{(0,1)} \\ l_{1,1}^{(1,0)} - g_{1,1}^{(1,0)} & l_{1,1}^{(0,1)} & - g_{2,1}^{(1,0)} & 0 \\ l_{2,1}^{(1,0)} & l_{2,1}^{(0,1)} - g_{1,1}^{(1,0)} & 0 & - g_{2,1}^{(1,0)} \\ - g_{1,1}^{(0,1)} & 0 &l_{1,1}^{(1,0)} - g_{2,1}^{(0,1)} & l_{1,1}^{(0,1)} \\ 0 & - g_{1,1}^{(0,1)} & l_{2,1}^{(1,0)} & l_{2,1}^{(0,1)} - g_{2,1}^{(0,1)}\\ & \vdots & & \vdots \\ l_{1,\nu }^{(1,0)} - g_{1,\nu }^{(1,0)} & l_{1,\nu }^{(0,1)} & - g_{2,\nu }^{(1,0)} & 0 \\ l_{2,\nu }^{(1,0)} & l_{2,\nu }^{(0,1)} - g_{1,\nu }^{(1,0)} & 0 & - g_{2,\nu }^{(1,0)} \\ - g_{1,\nu }^{(0,1)} & 0 & l_{1,\nu }^{(1,0)} - g_{2,\nu }^{(0,1)} & l_{1,\nu }^{(0,1)} \\ 0 & - g_{1,\nu }^{(0,1)} & l_{2,\nu }^{(1,0)} & l_{2,\nu }^{(0,1)} - g_{2,\nu }^{(0,1)}\end{array} \right ]}$$
  • The pair of equations:

    $$\displaystyle{L_{i}^{(1,0)}{\ast}T_{ 1}^{(n,m)}+L_{ i}^{(0,1)}{\ast}T_{ 2}^{(n,m)}-T_{ i}^{(n,m)}{\ast}[G_{ 1}^{(1,0)},G_{ 2}^{(0,1)}]-T_{ i}^{(m,n)}{\ast}[G_{ 1}^{(0,1)},G_{ 2}^{(1,0)}]-}$$
    $$\displaystyle{-\sum _{ \begin{array}{c}a+b=k,a\neq n,b\neq m \\ a(x_{1}+y_{1})+b(x_{2}+y_{2})=k \\ a,b,x_{1},y_{1},x_{2},y_{2}\in \mathbf{N} \end{array}}T_{i}^{(a,b)} {\ast} [G_{ 1}^{(x_{1},y_{1})},G_{ 2}^{(x_{2},y_{2})}] =}$$
    $$\displaystyle{=\sum _{ \begin{array}{c}a+b<k \\ a(x_{1}+y_{1})+b(x_{2}+y_{2})=k \\ a,b,x_{1},y_{1},x_{2},y_{2}\in \mathbf{N} \end{array}}T_{i}^{(a,b)} {\ast} [G_{ 1}^{(x_{1},y_{1})},G_{ 2}^{(x_{2},y_{2})}],\quad i = 1,2}$$

    with the coefficients of the \(T_{i}^{(n,m)}\), \(n + m = k\), as unknowns, is called the basic nonlinear k-degree system.

  • The matrix of the coefficients of the term \(\boldsymbol{\delta }_{\mathbf{i}}\boldsymbol{\epsilon }_{\mathbf{j}}\), which arises from the left-hand side of the above equation, is denoted by \(C_{i,j}\).

  • The matrix of coefficients of the above system is denoted by \(Q_{k}\). The corresponding augmented matrix is denoted by \(Q_{k}^{{\ast}}(T)\), where this notation indicates the dependence on the polynomials \(T_{i}^{(a,b)}\), a + b < k.

  • The set of the solutions of the 1-degree system is denoted by \(\Gamma \).

  • The set \(\mathcal{S}\) is defined as:

    $$\displaystyle{\mathcal{S} =\{ T \in \Gamma: rank(Q_{k}) = rank(Q_{k}^{{\ast}}(T)),k = 1,2,3,\ldots \}}$$

We present now the main theorem:

Theorem 3.

[7] Let L be a given two-dimensional linear discrete system and G a polynomial one. Let T be the series that solves the T-similarity problem, i.e. L ∗ T = T ∗ G . Then,

  (i) If \(\vert Q_{0}\vert = 0\) and \(\mathcal{S}\neq \varnothing \), then the T-series is a simple series.

  (ii) If \(rank(\mathcal{A}) < 4\) and \(\det (C_{i,j})\neq 0\) for every i, j, then the T-series is a series of series.

Let us pass now to the second question, that of calculating the different parts of the series T. To achieve this we use the following procedure:

  • By solving the system:

    $$\displaystyle\begin{array}{rcl} & & L_{1}^{(1,0)} {\ast} T_{ 1}^{(1,0)} + L_{ 1}^{(0,1)} {\ast} T_{ 2}^{(1,0)} = T_{ 1}^{(1,0)} {\ast} G_{ 1}^{(1,0)} + T_{ 1}^{(0,1)} {\ast} G_{ 2}^{(1,0)} \\ & & L_{1}^{(1,0)} {\ast} T_{ 1}^{(0,1)} + L_{ 1}^{(0,1)} {\ast} T_{ 2}^{(0,1)} = T_{ 1}^{(1,0)} {\ast} G_{ 1}^{(0,1)} + T_{ 1}^{(0,1)} {\ast} G_{ 2}^{(0,1)} {}\end{array}$$
    (3)
    $$\displaystyle\begin{array}{rcl} & & L_{2}^{(1,0)} {\ast} T_{ 1}^{(1,0)} + L_{ 2}^{(0,1)} {\ast} T_{ 2}^{(1,0)} = T_{ 2}^{(1,0)} {\ast} G_{ 1}^{(1,0)} + T_{ 2}^{(0,1)} {\ast} G_{ 2}^{(1,0)} \\ & & L_{2}^{(1,0)} {\ast} T_{ 1}^{(0,1)} + L_{ 2}^{(0,1)} {\ast} T_{ 2}^{(0,1)} = T_{ 2}^{(1,0)} {\ast} G_{ 1}^{(0,1)} + T_{ 2}^{(0,1)} {\ast} G_{ 2}^{(0,1)} {}\end{array}$$
    (4)

    we get the linear parts of the requested series. Since we are dealing with a homogeneous system, the relation | Q 0 |  = 0 guarantees an infinite number of polynomial solutions (\(T_{i}^{(1,0)},T_{j}^{(0,1)}\) are polynomials). Otherwise a series solution is obtained (\(T_{i}^{(1,0)},T_{j}^{(0,1)}\) are series).

  • Now we pass to the quadratic part. It consists of the following equations:

    $$\displaystyle\begin{array}{rcl} L_{1}^{(1,0)}& & {\ast}\ T_{ 1}^{(2,0)} + L_{ 1}^{(0,1)} {\ast} T_{ 2}^{(2,0)} = T_{ 1}^{(1,0)} {\ast} G_{ 1}^{(2,0)} + T_{ 1}^{(0,1)} {\ast} G_{ 2}^{(2,0)} + \\ & & +T_{1}^{(2,0)} {\ast} G_{ 1}^{(1,0)} + T_{ 1}^{(0,2)} {\ast} G_{ 2}^{(1,0)} + T_{ 1}^{(1,1)} {\ast} [G_{ 1}^{(1,0)},G_{ 2}^{(1,0)}] {}\end{array}$$
    (5)
    $$\displaystyle\begin{array}{rcl} L_{2}^{(1,0)}& & {\ast}\ T_{ 1}^{(2,0)} + L_{ 2}^{(0,1)} {\ast} T_{ 2}^{(2,0)} = T_{ 2}^{(1,0)} {\ast} G_{ 1}^{(2,0)} + T_{ 2}^{(0,1)} {\ast} G_{ 2}^{(2,0)} + \\ & & +T_{2}^{(2,0)} {\ast} G_{ 1}^{(1,0)} + T_{ 2}^{(0,2)} {\ast} G_{ 2}^{(1,0)} + T_{ 2}^{(1,1)} {\ast} [G_{ 1}^{(1,0)},G_{ 2}^{(1,0)}] {}\end{array}$$
    (6)
    $$\displaystyle\begin{array}{rcl} L_{1}^{(1,0)}& & {\ast}\ T_{ 1}^{(0,2)} + L_{ 1}^{(0,1)} {\ast} T_{ 2}^{(0,2)} = T_{ 1}^{(1,0)} {\ast} G_{ 1}^{(0,2)} + T_{ 1}^{(0,1)} {\ast} G_{ 2}^{(0,2)} + \\ & & +T_{1}^{(2,0)} {\ast} G_{ 1}^{(0,1)} + T_{ 1}^{(0,2)} {\ast} G_{ 2}^{(0,1)} + T_{ 1}^{(1,1)} {\ast} [G_{ 1}^{(0,1)},G_{ 2}^{(0,1)}] {}\end{array}$$
    (7)
    $$\displaystyle\begin{array}{rcl} L_{2}^{(1,0)}& & {\ast}\ T_{ 1}^{(0,2)} + L_{ 2}^{(0,1)} {\ast} T_{ 2}^{(0,2)} = T_{ 2}^{(1,0)} {\ast} G_{ 1}^{(0,2)} + T_{ 2}^{(0,1)} {\ast} G_{ 2}^{(0,2)} + \\ & & +T_{2}^{(2,0)} {\ast} G_{ 1}^{(0,1)} + T_{ 2}^{(0,2)} {\ast} G_{ 2}^{(0,1)} + T_{ 2}^{(1,1)} {\ast} [G_{ 1}^{(0,1)},G_{ 2}^{(0,1)}] {}\end{array}$$
    (8)
    $$\displaystyle\begin{array}{rcl} L_{1}^{(1,0)}& & {\ast}\ T_{ 1}^{(1,1)} + L_{ 1}^{(0,1)} {\ast} T_{ 2}^{(1,1)} = T_{ 1}^{(1,0)} {\ast} G_{ 1}^{(1,1)} + T_{ 1}^{(0,1)} {\ast} G_{ 2}^{(1,1)} + \\ & & +T_{1}^{(1,1)} {\ast} [G_{ 1}^{(0,1)},G_{ 2}^{(1,0)}] + T_{ 1}^{(1,1)} {\ast} [G_{ 1}^{(1,0)},G_{ 2}^{(0,1)}] {}\end{array}$$
    (9)
    $$\displaystyle\begin{array}{rcl} L_{2}^{(1,0)}& & {\ast}\ T_{ 1}^{(1,1)} + L_{ 2}^{(0,1)} {\ast} T_{ 2}^{(1,1)} = T_{ 2}^{(1,0)} {\ast} G_{ 1}^{(1,1)} + T_{ 2}^{(0,1)} {\ast} G_{ 2}^{(1,1)} + \\ & & +T_{2}^{(1,1)} {\ast} [G_{ 1}^{(0,1)},G_{ 2}^{(1,0)}] + T_{ 2}^{(1,1)} {\ast} [G_{ 1}^{(1,0)},G_{ 2}^{(0,1)}] {}\end{array}$$
    (10)

Relations (5), (6) arise by comparing the \(\boldsymbol{\delta }_{i}\boldsymbol{\delta }_{j}\) terms, (7), (8) by comparing the \(\boldsymbol{\epsilon }_{i}\boldsymbol{\epsilon }_{j}\) terms and (9), (10) the \(\boldsymbol{\delta }_{i}\boldsymbol{\epsilon }_{j}\) terms. Substituting the solutions we have already found for the linear part, we get the quadratic quantities \(T_{i}^{(2,0)},T_{j}^{(0,2)},T_{k}^{(1,1)}\). We repeat the procedure for the cubic terms, and so on. This method will finally endow us with the desired series T.

An interesting result, connected with the above iteration, is the next corollary:

Corollary 1.

[7] If the linear equations (3), (4) admit a series solution, then T is a series of series.

Levels of Model Complexity

Complex systems appear in many fields of contemporary science, and different communities have different views of complexity and of how to rank it [1, 2]. In this section we shall try to approach this issue for 2D Polynomial Discrete Systems, using the mathematical tools developed previously. Specifically, we have described a procedure for checking the equivalence of a nonlinear discrete system with a linear one. This was achieved via a D-series, named T. The structure of T determines the kind of model complexity, or how “hard” the nonlinearity is. If, for instance, T converges, then we speak of a “light” complexity, otherwise of a strong one. Whether T is a simple series or consists of an infinite sum of series (a series of series) also influences the kind of complexity, since checking convergence in the latter case is a very difficult task. The nature of L plays an important role too. If, for instance, it is stable, then the level of complexity is less than the level of complexity corresponding to an unstable L. We summarize the different cases of complexity degrees in the next table:

T-Series                                        Complexity degree
                                                L Stable   L Unstable
A polynomial                                       0           0+
An invertible, convergent, simple series           1           1+
A convergent simple series                         1.5         1.5+
A simple series                                    2           2+
An invertible, convergent, series of series        3           3+
A convergent series of series                      3.5         3.5+
A series of series                                 4           4+

Examples

Example 1.

Let us have the linear system:

$$\displaystyle{x(t) = x(t - 1) + 2x(t - 2) + \frac{1} {2}y(t - 1)}$$
$$\displaystyle{y(t) = \frac{7} {2}x(t - 1) - 2y(t - 1) + 2y(t - 2)}$$

We want to see how this system can be transformed into an equivalent linear one. This is just to illustrate the procedure and to see how our approach fits with well-known cases. The linear “target” system will be:

$$\displaystyle{u(t) = -\frac{3} {2}u(t - 1) + 2u(t - 2) - 3v(t - 1)}$$
$$\displaystyle{v(t) = -u(t - 1) + \frac{1} {2}v(t - 1) + 2v(t - 2)}$$

Using the D-operators, we get the next descriptions:

$$\displaystyle{\left [\begin{array}{c} x(t)\\ y(t) \end{array} \right ] = \left [\begin{array}{c} \boldsymbol{\delta }_{0} + 2\boldsymbol{\delta }_{1} + \frac{1} {2}\boldsymbol{\epsilon }_{0} \\ \frac{7} {2}\boldsymbol{\delta }_{0} - 2\boldsymbol{\epsilon }_{0} + 2\boldsymbol{\epsilon }_{1}\end{array} \right ]\left [\begin{array}{c} x(t - 1)\\ y(t - 1) \end{array} \right ]}$$
$$\displaystyle{\hspace{10.0pt} \left [\begin{array}{c} u(t)\\ v(t) \end{array} \right ] = \left [\begin{array}{c} -\frac{3} {2}\boldsymbol{\delta }_{0} + 2\boldsymbol{\delta }_{1} - 3\boldsymbol{\epsilon }_{0} \\ -\boldsymbol{\delta }_{0} + \frac{1} {2}\boldsymbol{\epsilon }_{0} + 2\boldsymbol{\epsilon }_{1}\end{array} \right ]\left [\begin{array}{c} u(t - 1)\\ v(t - 1) \end{array} \right ]}$$

and, in short, \(\mathbf{x}(t) = \mathbf{G}\mathbf{x}(t - 1)\), \(\hat{\mathbf{x}}(t) = \mathbf{L}\hat{\mathbf{x}}(t - 1)\). We want to find series \(T_{1}\), \(T_{2}\) such that the following equations hold:

$$\displaystyle{\mathbf{L}{\ast}\mathbf{T} = \mathbf{T}{\ast}\mathbf{G} \Rightarrow \left \{\begin{array}{l} L_{1} {\ast} [T_{1},T_{2}] = T_{1} {\ast} [G_{1},G_{2}] \\ L_{2} {\ast} [T_{1},T_{2}] = T_{2} {\ast} [G_{1},G_{2}]\end{array} \right.}$$

For the sake of the computation we arbitrarily set: \(T_{1} = w_{1,0}\boldsymbol{\delta }_{0} + w_{1,1}\boldsymbol{\delta }_{1} + h_{1,0}\boldsymbol{\epsilon }_{0} + h_{1,1}\boldsymbol{\epsilon }_{1}\), \(T_{2} = w_{2,0}\boldsymbol{\delta }_{0} + w_{2,1}\boldsymbol{\delta }_{1} + h_{2,0}\boldsymbol{\epsilon }_{0} + h_{2,1}\boldsymbol{\epsilon }_{1}\); we could, of course, take any other number of terms for the series \(T_{1}\), \(T_{2}\). By equating the coefficients and solving the corresponding system of equations we get:

$$\displaystyle{w_{1,0} = h_{1,0} - 6h_{2,0}\quad,\quad w_{1,1} = h_{1,1} - 6h_{2,1}}$$
$$\displaystyle{w_{2,0} = -2h_{1,0} + 5h_{2,0}\quad,\quad w_{2,1} = -2h_{1,1} + 5h_{2,1}}$$

and thus a transformation which solves the problem is:

$$\displaystyle{T_{1} = (h_{1,0} - 6h_{2,0})\boldsymbol{\delta }_{0} + (h_{1,1} - 6h_{2,1})\boldsymbol{\delta }_{1} + h_{1,0}\boldsymbol{\epsilon }_{0} + h_{1,1}\boldsymbol{\epsilon }_{1}}$$
$$\displaystyle{T_{2} = (-2h_{1,0} + 5h_{2,0})\boldsymbol{\delta }_{0} + (-2h_{1,1} + 5h_{2,1})\boldsymbol{\delta }_{1} + h_{2,0}\boldsymbol{\epsilon }_{0} + h_{2,1}\boldsymbol{\epsilon }_{1}}$$

with \(h_{i,j} \in \mathbb{R}\). Since the T-similarity problem in this case admits a polynomial solution and L is unstable, we say that the original linear system has a complexity degree equal to 0+. If we could solve the problem with a stable L, the complexity degree would be equal to 0.
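The coefficient matching above can be reproduced symbolically. The sketch below uses SymPy; the substitution rule implementing the star-product for linear D-series (index shift \(\boldsymbol{\delta }_{k}\mapsto \boldsymbol{\delta }_{k+1}\), \(\boldsymbol{\epsilon }_{k}\mapsto \boldsymbol{\epsilon }_{k+1}\) on the substituted series) is our reading of the Preliminaries; all identifier names are ours. It recovers exactly the family of solutions given above:

```python
import sympy as sp

# Basis symbols standing for the D-operators: d_k ~ delta_k, e_k ~ epsilon_k.
d0, d1, d2, e0, e1, e2 = sp.symbols('d0 d1 d2 e0 e1 e2')
w10, w11, w20, w21 = sp.symbols('w10 w11 w20 w21')
h10, h11, h20, h21 = sp.symbols('h10 h11 h20 h21')

def shift(s):
    """Index shift delta_k -> delta_{k+1}, epsilon_k -> epsilon_{k+1}."""
    return s.subs({d0: d1, d1: d2, e0: e1, e1: e2}, simultaneous=True)

def star(F, S1, S2):
    """Star product F * [S1, S2] for linear D-series: substitute the first
    component into the delta's and the second into the epsilon's."""
    return F.subs({d0: S1, d1: shift(S1), e0: S2, e1: shift(S2)},
                  simultaneous=True)

# The data of Example 1.
G1 = d0 + 2*d1 + sp.Rational(1, 2)*e0
G2 = sp.Rational(7, 2)*d0 - 2*e0 + 2*e1
L1 = -sp.Rational(3, 2)*d0 + 2*d1 - 3*e0
L2 = -d0 + sp.Rational(1, 2)*e0 + 2*e1
T1 = w10*d0 + w11*d1 + h10*e0 + h11*e1
T2 = w20*d0 + w21*d1 + h20*e0 + h21*e1

# Coefficient matching in L * T = T * G.
eqs = [sp.expand(star(L1, T1, T2) - star(T1, G1, G2)),
       sp.expand(star(L2, T1, T2) - star(T2, G1, G2))]
coeffs = [c for eq in eqs for c in sp.Poly(eq, d0, d1, d2, e0, e1, e2).coeffs()]
sol = sp.solve(coeffs, [w10, w11, w20, w21])
print(sol)  # w's expressed in terms of the free parameters h_{i,j}
```

Running this yields \(w_{1,0}=h_{1,0}-6h_{2,0}\), \(w_{1,1}=h_{1,1}-6h_{2,1}\), \(w_{2,0}=-2h_{1,0}+5h_{2,0}\), \(w_{2,1}=-2h_{1,1}+5h_{2,1}\), as stated.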

Example 2.

Let us now consider the nonlinear system:

$$\displaystyle{x(t + 1) = x(t) + y(t) - {x}^{2}(t)}$$
$$\displaystyle{y(t + 1) = x(t)}$$

We want to examine whether it can be equivalent to the following linear system (the “target”) and thus to find its complexity degree.

$$\displaystyle{z(n + 1) = z(n) - z(n - 1) + w(n) + \frac{1} {2}(1 -\sqrt{5})w(n - 1)}$$
$$\displaystyle{w(n + 1) = z(n) + \frac{1} {2}(1 + \sqrt{5})z(n - 1) + w(n - 1)}$$

Using the D-operators, we get the next descriptions: \(\mathbf{x}(t + 1) = \mathbf{G}\mathbf{x}(t)\), \(\hat{\mathbf{x}}(t + 1) = \mathbf{L}\hat{\mathbf{x}}(t)\) where:

$$\displaystyle{\mathbf{G} = \left [\begin{array}{c} \boldsymbol{\delta }_{0} +\boldsymbol{\epsilon } _{0} -\boldsymbol{\delta }_{0}^{2} \\ \boldsymbol{\delta }_{0} \end{array} \right ]\quad,\quad \mathbf{L} = \left [\begin{array}{c} \boldsymbol{\delta }_{0} -\boldsymbol{\delta }_{1} +\boldsymbol{\epsilon } _{0} + \frac{1} {2}(1 -\sqrt{5})\boldsymbol{\epsilon }_{1} \\ \boldsymbol{\delta }_{0} + \frac{1} {2}(1 + \sqrt{5})\boldsymbol{\delta }_{1} +\boldsymbol{\epsilon } _{1} \end{array} \right ]}$$

First of all, we see that \(\vert Q_{0}\vert \neq 0\) and thus the problem admits a simple series as a solution. This means that it will be of complexity degree either 1 or 3. To calculate the series \(T_{1}\), \(T_{2}\) such that the equation \(\mathbf{L}{\ast}\mathbf{T} = \mathbf{T}{\ast}\mathbf{G}\) holds, we follow the procedure of the previous section and we take:

$$\displaystyle\begin{array}{rcl} & & \quad \mathbf{T}_{1} = (\Gamma + A)\boldsymbol{\delta }_{0} + \left (-\frac{1} {2}(1 + \sqrt{5})\Gamma + \Delta -\frac{1} {2}(1 + \sqrt{5})A + B\right )\boldsymbol{\delta }_{1} + A\boldsymbol{\epsilon }_{0} {}\\ & & \qquad \quad + \left (\Gamma -\frac{1} {2}(1 + \sqrt{5})A + B\right )\boldsymbol{\epsilon }_{1} + \frac{1} {2}A\boldsymbol{\delta }_{0}^{2} + \frac{1} {2}\Gamma \boldsymbol{\epsilon }_{0}^{2} + (A + \Gamma )\boldsymbol{\delta }_{ 0}\boldsymbol{\epsilon }_{0} {}\\ & & \qquad \quad -\frac{1} {6}A\boldsymbol{\delta }_{0}^{3} -\frac{1} {6}\Gamma \boldsymbol{\epsilon }_{0}^{3} + \frac{1} {2}(A + \Gamma )\boldsymbol{\delta }_{0}\boldsymbol{\epsilon }_{0}^{2} + \frac{1} {2}(3A + \Gamma )\boldsymbol{\delta }_{0}^{2}\boldsymbol{\epsilon }_{ 0} + \cdots {}\\ & & \mathbf{T}_{2} = A\boldsymbol{\delta }_{0}+B\boldsymbol{\delta }_{1}+\Gamma \boldsymbol{\epsilon }_{0}+\Delta \boldsymbol{\epsilon }_{1}+\frac{1} {2}\Gamma \boldsymbol{\delta }_{0}^{2}+\frac{1} {2}(A-\Gamma )\boldsymbol{\epsilon }_{0}^{2}+A\boldsymbol{\delta }_{ 0}\boldsymbol{\epsilon }_{0}-\frac{1} {6}\Gamma \boldsymbol{\delta }_{0}^{3} + \frac{1} {6}(\Gamma - A)\boldsymbol{\epsilon }_{0}^{3} {}\\ & & \qquad \quad +\frac{1} {2}A\boldsymbol{\delta }_{0}\boldsymbol{\epsilon }_{0}^{2}+\left (\frac{1} {2}A+\Gamma \right )\boldsymbol{\delta }_{0}^{2}\boldsymbol{\epsilon }_{ 0}+\cdots {}\\ \end{array}$$

where \(A,B,\Gamma,\Delta \) are arbitrary real parameters. If we are able to find values for these parameters which guarantee the convergence of the series, the complexity degree will be equal to 1+.
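The “+” in this conclusion rests on the target system L being unstable. This can be checked numerically on a companion-form state-space realization of L (an assumed but standard realization, with state \([z(n),z(n-1),w(n),w(n-1)]\)):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2   # (1 + sqrt(5))/2, the golden ratio
psi = (1 - np.sqrt(5)) / 2   # (1 - sqrt(5))/2

# Companion-form matrix of the target system L on [z(n), z(n-1), w(n), w(n-1)]:
#   z(n+1) = z(n) - z(n-1) + w(n) + psi*w(n-1)
#   w(n+1) = z(n) + phi*z(n-1) + w(n-1)
A = np.array([[1.0, -1.0, 1.0, psi],
              [1.0,  0.0, 0.0, 0.0],
              [1.0,  phi, 0.0, 1.0],
              [0.0,  0.0, 1.0, 0.0]])

rho = max(abs(np.linalg.eigvals(A)))  # spectral radius
print(rho)  # ≈ 1.618 > 1, so L is unstable
```

Indeed, the characteristic polynomial of this realization factors as \(\lambda ^{2}(\lambda ^{2} -\lambda - 1)\), so the spectral radius is the golden ratio \(\frac{1} {2}(1 + \sqrt{5}) > 1\).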