1 Introduction

Since Ott et al.[1] first proposed a method of chaos control in 1990, chaos control has attracted great attention and many successful experiments have been reported[2–17], such as feedback control[5, 6], impulsive control[7], the back-stepping method[8], adaptive control[9–14], sliding mode control[15, 16], fuzzy control[17], finite-time control, and so on. However, most methods target a particular class of chaotic systems and are complex both in design and implementation. How to control chaotic systems with a simple, physically realizable and general controller is therefore particularly significant both for theoretical research and practical applications[18].

One practicable method is linear feedback, but it is difficult to find a suitable feedback constant[10]. Recently, Huang proposed an adaptive feedback control method to effectively synchronize two almost arbitrary identical chaotic systems in a series of papers[9, 13, 19–21]. However, a uniform Lipschitz condition on the nonlinear vector field is always assumed in advance, and these papers could not guarantee the feasibility of the adaptive control technique (ACT, where the control strength is \({\dot \varepsilon_i} = - {r_i}e_i^2,\ i = 1,2, \cdots, n\), with e i being the error) in non-uniform Lipschitz systems[11]. Lin[11] generalized the ACT to systems which only satisfy a local Lipschitz condition. In fact, the above methods[9, 11, 13, 19–21] are speed-gradient methods in essence. The initial value of the control strength ε i is difficult to choose, and the square of the error is not easy to implement in practice.

Besides, none of them studied the effect of the convergence factor. In fact, the convergence factor has an admissible range, and its choice is of great importance to ensure a limited convergence time, which has great practical significance. It also affects the property of the final control strength, which helps us understand the essence of chaos control. However, there is little research on how the convergence factor affects the convergence time and the property of the final control strength.

Inspired by these discussions, we present a novel control strength calculated from the accumulative error, which avoids the difficulty of choosing initial values, and discuss how the convergence factor affects the convergence of the system and the characteristics of the final control strength. The rest of the paper is organized as follows. The algorithm of accumulative error and its mathematical proof are given in Section 2. In Section 3, two methods of choosing the convergence factor are discussed. The control scheme is applied to three examples in Section 4. Finally, conclusions are drawn in Section 5.

2 Control scheme of accumulative error

Consider the following n-dimensional continuous chaotic system,

$$\dot x = f(x)$$
(1)

where \(x = {[{x_1},{x_2}, \cdots, {x_n}]^{\rm{T}}} \in {{\bf{R}}^n}\) is the state variable, and \(f(x) = {[{f_1}(x),{f_2}(x), \cdots, {f_n}(x)]^{\rm{T}}} \in {{\bf{R}}^n}\) is an n-dimensional continuously differentiable nonlinear vector function which satisfies a local Lipschitz condition, i.e., for any compact set S, there exists a positive constant l i (S) such that

$$|{f_i}(x) - {f_i}(y)|\leqslant {l_i}(S)||x - y||,\quad \forall x,\,\,y \in S$$
(2)

where ∥·∥ denotes the Euclidean norm, and the existence of l i (S) depends on the choice of the compact set S. For a given pair of points (x, y), there exists a minimal coefficient l (x,y).

Definition 1. For a given pair of points (x, y), the minimal local Lipschitz coefficient l (x,y) is the constant satisfying

$$\sum\limits_{i = 1}^n {({x_i} - {y_i})({f_i}(x) - {f_i}(y))} = {l_{(x,y)}}||x - y|{|^2}\leqslant \left({\sum\limits_{i = 1}^n {{l_i}(S)} } \right)||x - y|{|^2},\quad \forall x,y \in S.$$
(3)

The minimal local Lipschitz coefficient l (x,y) is a function of (x, y), and it may be negative, positive or zero. Its upper bound is

$$\sum\limits_{i = 1}^n {{l_i}(S)}.$$

Systems in the form of (1) satisfying (2, 3) include all the well-known chaotic and hyperchaotic systems.

The unstable fixed point is denoted as \({x^*} = {[x_1^*,x_2^*, \cdots, x_n^*]^{\rm{T}}}\) which satisfies f(x*) = 0. To stabilize this equilibrium, we design the controlled system as

$$\dot x = f(x) + \varepsilon (x - {x^*})$$
(4)
$${\dot \varepsilon _i} = {\gamma _i}({x_i} - x_i^*),\qquad i = 1,2, \cdots, n$$
(5)

where γ i is the convergence factor, which may be negative or positive. Its value is related to the initial point and determines the convergence time.

Because of the negative feedback, the initial value of ε(t) should be a non-positive constant vector with small absolute value. For simplicity, setting ε(0) = 0, we obtain the control strength from (5) as

$${\varepsilon _i}(t) = {\gamma _i}\int_0^t {({x_i}(} \omega) - x_i^*){\rm{d}}\omega, \qquad i = 1,2, \cdots,n.$$
(6)

It means the controlling term changes with the error, starting from zero and ending at zero, so it can be regarded as a non-invasive control. In essence, it is the product of the error and its integral, not a proportional-integral (PI) control. It is significantly different from the usual linear feedback in [5, 12], where ε i is a constant or piecewise constant. In particular, linear feedback applies the same fixed control strength wherever the initial point starts, so the strength must be chosen maximal, which is a kind of waste in practice[13].

Definition 2. The control strength ε i (t) in (6) is a linear function of the integral of the error (the accumulative error) over the past time. Therefore, we call the control method (4)–(6) accumulative error control (AEC).
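As an illustration (our own sketch, not code from the paper), the AEC law (4)–(6) can be simulated with a simple explicit-Euler step; the function and variable names are our own, and the scalar system used in the test below is an assumed toy example.

```python
import numpy as np

def aec_step(x, eps, x_star, gamma, f, dt):
    """One explicit-Euler step of the AEC-controlled system (4)-(5).

    x, eps : current state and control strengths (length-n arrays)
    x_star : target fixed point
    gamma  : convergence factors
    f      : vector field of the uncontrolled system
    """
    e = x - x_star                       # error
    x_next = x + (f(x) + eps * e) * dt   # controlled dynamics (4)
    eps_next = eps + gamma * e * dt      # (5): eps integrates the error, as in (6)
    return x_next, eps_next
```

Because eps is built up by integrating the error, ε(0) = 0 needs no tuning; only the sign and magnitude of γ matter, as discussed in Section 3.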

The control strength (6) is different from the following control strength in [11, 13, 14],

$${\varepsilon _i}(t) = - {r_i}\int_0^t {{{({x_i} - x_i^*)}^2}} {\rm{d}}t,\quad i = 1,2, \cdots, n$$
(7)

where r i is an arbitrary positive constant.

  • Remark 1. The control strength (6) is a linear function of the accumulative error, which has a clear physical meaning in real systems and can be realized by integral circuits, such as ordinary resistance-capacitance (RC) circuits. Therefore, it is simpler and much easier to implement than (7).

  • Remark 2. The control strength (6) does not require a calculated initial value, while the initial value of ε i (t) in (7) is difficult to choose[11].

In fact, the control strength (7) can be deduced by the speed-gradient method. We take the error criterion function as

$$Q(x,t) = {1 \over 2}\sum\limits_{i = 1}^n {{{({x_i} - x_i^*)}^2}} $$
(8)

and its time derivative is

$${{{\rm{d}}Q(x,t)} \over {{\rm{d}}t}} = \sum\limits_{i = 1}^n {({x_i} - x_i^*)} ({f_i}(x) + \varepsilon ({x_i} - x_i^*)).$$
(9)

According to the speed-gradient method, the change of ε i is opposite to the gradient direction of \({{{\rm{d}}Q(x,t)} \over {{\rm{d}}t}}\), so we obtain the adaptation rule

$${\dot \varepsilon _i} = - {r_i}{({x_i} - x_i^*)^2}.$$
(10)

Therefore, the control strength (6), rather than (7), is a novel control method. Next we will verify its stability.

Noticing that the boundedness of every trajectory produced by system (4) is crucial to achieving chaos control, we investigate under what conditions the trajectories of system (4) remain bounded.

We introduce a Lyapunov function as

$$V(x,t) = {(x - {x^*})^{\rm{T}}}(x - {x^*}).$$
(11)

Select an arbitrary initial point

$$x = {[{x_1}(0),{x_2}(0), \cdots, {x_n}(0)]^{\rm{T}}} \in {{\bf{R}}^n}.$$
(12)

Set a constant as

$$p = \sum\limits_{i = 1}^n {{{({x_i}(0) - x_i^*)}^2}}$$

and construct a closed ball by

$$\Phi _{\sqrt {\sigma p} }({x^*}) = \{ x \in {{\bf{R}}^n}|\sum\limits_{i = 1}^n {{{({x_i} - x_i^*)}^2}} \leqslant \sigma p\}$$
(13)

where σ is an arbitrary positive constant larger than 1. It follows from (3) that there exists a minimal local Lipschitz coefficient l(x,x*) for any \(x \in {\Phi _{\sqrt {\sigma p} }}({x^*})\) satisfying

$$\sum\limits_{i = 1}^n {({x_i} - x_i^*)({f_i}(x) - {f_i}({x^*}))} = {l_{(x,{x^*})}}||x - {x^*}|{|^2}\leqslant \left({\sum\limits_{i = 1}^n {{l_i}(\Phi)} } \right)||x - {x^*}|{|^2},\quad \forall x,{x^*} \in \Phi.$$
(14)

Due to the local Lipschitz condition, the solution of system (4), denoted by x(t) = x(t; 0, x(0)), is unique on its maximal interval of existence [0, t 1). Next, we prove that the ending point t 1 is infinity. First, we claim that \(x(t) \in {\Phi _{\sqrt {\sigma p} }}({x^*})\) for any t ∈ [0, t 1), and we reason by contradiction. If this claim is not true, there exists a time constant

$${t_2} = \inf \{ t \in (0,{t_1})|V(x,t)\geqslant\sigma p\} $$
(15)

which is the first time the trajectory x(t) hits the boundary of the ball. The derivative of the function V(x, t) for t ∈ [0, t 2] is

$$\matrix{{\dot V(x,t) = 2\sum\limits_{i = 1}^n {({x_i} - x_i^*)({f_i}(x) - {f_i}({x^*}))} + 2\sum\limits_{i = 1}^n {{\varepsilon _i}{{({x_i} - x_i^*)}^2}} = } \hfill \cr {2l(x,{x^*})||x - {x^*}|{|^2} + 2{{(x - {x^*})}^{\rm{T}}}{\rm{diag}}\{ {\varepsilon _1},{\varepsilon _2}, \cdots, {\varepsilon _n}\} (x - {x^*}) = } \hfill \cr {2{{(x - {x^*})}^{\rm{T}}}{\rm{diag}}\{ l(x,{x^*}) + {\varepsilon _1},l(x,{x^*}) + {\varepsilon _2}, \cdots, l(x,{x^*}) + {\varepsilon _n}\} (x - {x^*})\leqslant } \hfill \cr {2{{(x - {x^*})}^{\rm{T}}}{\rm{diag}}\{ \sum\limits_{i = 1}^n {{l_i}(\Phi)} + {\varepsilon _1},\sum\limits_{i = 1}^n {{l_i}(\Phi)} + {\varepsilon _2}, \cdots, \sum\limits_{i = 1}^n {{l_i}(\Phi)} + {\varepsilon _n}\} (x - {x^*})\leqslant 0} \hfill \cr }$$
(16)

under the condition that

$$l(x,{x^*}) + {\varepsilon _i}\leqslant \sum\limits_{j = 1}^n {{l_j}} (\Phi) + {\varepsilon _i} < 0,\qquad i = 1,2, \cdots, n.$$
(17)

For any t ∈ [0, t 2], each x i (t) is bounded, so there exists γ i satisfying

$${\gamma _i}\int_0^t {({x_i} - x_i^*)} {\rm{d}}t < - \sum\limits_{i = 1}^n {{l_i}} (\Phi)\leqslant - l(x,{x^*})$$
(18)

where −l(x,x*), the opposite of the minimal local Lipschitz coefficient, is a function of the variable x. The condition ε i < −l(x,x*) is weaker than \({\varepsilon _i} < - \sum\limits_{i = 1}^n {{l_i}} (\Phi)\), which relaxes the condition for chaos control.

In fact, the sign of γ i makes ε i < 0, and the magnitude of γ i makes ε i < −l(x,x*). Although ε i and l(x, x*) vary with x, the strength \({\varepsilon _i}(t) = {\gamma _i}\int_0^t {({x_i} - x_i^*)} {\rm{d}}t\) is the integral of the error, so sign changes of \(({x_i} - x_i^*)\) at some moments may not affect the validity of ε i < −l(x,x*). However, when the sign of \(({x_i} - x_i^*)\) changes frequently, as for some trigonometric functions, we adopt

$${\gamma _i} = {\gamma _{i0}}\,{\rm{sgn}}({x_i} - x_i^*)\,\,{\rm{or}}\,\,{\gamma _i} = - {\gamma _{i0}}\,{\rm{sgn}}({x_i} - x_i^*)$$
(19)

where γ i0 is a constant. With an appropriate γ i0, the value of ε i is then always less than −l(x,x*) for the same variable x.

Obviously, it follows that V(x(t),t) is decreasing on [0,t 2], which leads to

$$\sigma p\leqslant V(x({t_2}),{t_2})\leqslant V(x(0),0) = p < \sigma p.$$
(20)

This is a contradiction, which validates our claim: the trajectory \(x(t){|_{[0,{t_1})}}\), starting inside the bounded ball, never leaves it. Thus, by the theory of ordinary differential equations, we can extend the ending point t 1 to +∞, and consequently x(t)∣[0,+∞) is bounded under condition (18).

The largest invariant set of

$$E = \{ x|x \in {\Phi _{\sqrt {\sigma p} }}({x^*}),\dot V = 0\} $$
(21)

only has one element x = x *. In light of the LaSalle invariance principle[22], we conclude that the trajectory x(t) starting from x(0) approaches x* as t → +∞. Therefore, the controlled system (4) is asymptotically stable at x *. This means there exists a finite time t f such that ∥ x − x* ∥ < δ for t ∈ [t f , +∞), where δ is an arbitrarily small positive constant, i.e., the accuracy demand. Therefore, ε i (t) approaches a negative constant \(\varepsilon _i^*\) for t ⩾ t f .

Furthermore, condition (18) is not necessarily true for any t ∈ [0, +∞). We assume x i (t) is bounded for any t ∈ [0,t 3], and

$${p_{\max }} = \mathop {\max }\limits_{t \in [0,{t_3}]} \sum\limits_{i = 1}^n {({x_i}(} t) - x_i^*{)^2}.$$
(22)

Then, for any t ∈ [t 3, +∞), if there exists γ i satisfying condition (18), x(t) asymptotically approaches x* as t → +∞.

Theorem 1. For the continuous chaotic system (4) with accumulative error control in the form of (5), x(t) is bounded for any t ∈ [0,t 3]. If there exist γ i , i = 1, 2, ⋯, n, which meet condition (18) for any t ∈ [t 3, +∞), then x(t) is asymptotically stable at x * starting from any initial point \(x(0) \in {\Phi _{\sqrt {\sigma {p_{\max }}} }}\). At the same time, ε arrives at a negative constant vector ε *.

Therefore, the essence of chaos control is negative feedback. The convergence factor γ i in (6) satisfying condition (18) ensures the negative feedback. The chaotic systems that can be stabilized by the control strength (7) can also be stabilized by (6). However, (6) has a simpler structure than (7) and is easier to implement in practice. It can be directly extended to time-varying systems of the form \(\dot x = f(x,t)\), as illustrated in the following simulations.

In particular, some initial points that are far from the fixed point, and that would generate unbounded trajectories in the uncontrolled chaotic system, can still be stabilized by this control method. This indicates that x * has a large attractive region, which implies that such chaos control is quite robust against the effect of noise. This will also be illustrated in the following simulations.

In what follows, we discuss some characteristics of ε*.

Setting e = xx*, we linearize (4) at the equilibrium point x * and get

$$\dot e = (D{f_{{x^*}}} + {G^*})e$$
(23)

where \(D{f_{{x^*}}} = {\textstyle{{\partial f} \over {\partial x}}}{|_{x = {x^*}}}\) is the Jacobi matrix at x*, and \({G^*} = {\rm{diag}}\{ \varepsilon _1^*,\varepsilon _2^*, \cdots, \varepsilon _n^*\}\).

As system (1) at x * is unstable, not all eigenvalues of Df x * have negative real parts. From Theorem 1, we know \(\varepsilon _i^* < 0\). Therefore, according to the linear control theory and linear algebra, we have the following remarks.

  • Remark 3. All eigenvalues of \(D{f_{{x^*}}} + {G^*}\) lie to the left of the corresponding eigenvalues of \(D{f_{{x^*}}}\). Under the premise of convergence, the larger the absolute value of γ i is, the further the eigenvalues of the linearized system matrix are moved to the left. If the absolute value of γ i is large enough, all eigenvalues of \(D{f_{{x^*}}} + {G^*}\) have negative real parts. Therefore, the effect of the controlling term is to move the eigenvalues of the system matrix to the left in the complex plane.

  • Remark 4. The final strength vector ε* can be used as a linear feedback constant for the control of continuous chaotic systems.

As the stability of the controller (7) has been proved in [9, 11, 13, 19–21], it follows from (18) that there exists r i satisfying

$${r_i}\int_0^t {{{({x_i} - x_i^*)}^2}} {\rm{d}}t > \sum\limits_{i = 1}^n {{l_i}} (\Phi).$$
(24)

It can be seen from (24) that r i cannot be arbitrarily small. However, if r i is too large, the trajectory produced by system (4) may be unbounded. Therefore, r i has a range for convergence, too.

It is easily concluded that Theorem 1, Remark 3 and Remark 4 are also true for the control strength (7).

3 Two methods for choosing convergence factor

Although we have verified the existence of γ i , it is difficult to determine their specific values for different initial points. Numerical simulation and the estimation of various characteristics remain the main methods of studying chaotic systems[2]. Therefore, we choose the values of γ i by the following two methods.

The first method adopts the gradient descent algorithm to search for γ i online or offline. From the first-order sampled version of (4), we have

$$\matrix{{{x_i}(k + 1) = {x_i}(k) + [{f_i}(x(k)) + {\varepsilon _i}({x_i}(k) - x_i^*)]\Delta t = } \hfill \cr {{x_i}(k) + [{f_i}(x(k)) + {\gamma _i}(\sum\limits_{j = 1}^{k - 1} {({x_i}(j) - x_i^*)\Delta t})({x_i}(k) - x_i^*)]\Delta t} \hfill \cr }$$
(25)

where Δt denotes the time interval. We set the error criterion function of each component as

$${q_i}(k + 1) = {1 \over 2}{({x_i}(k + 1) - x_i^*)^2}.$$
(26)

From the gradient descent algorithm, we get

$$\matrix{{{\gamma _i}(k + 1) = {\gamma _i}(k) - {\eta _i}{{\partial {q_i}} \over {\partial {\gamma _i}}} = {\gamma _i}(k) - } \hfill \cr {{\eta _i}({x_i}(k + 1) - x_i^*)({x_i}(k) - x_i^*)\Delta t\left({\sum\limits_{j = 1}^{k - 1} {({x_i}(j) - x_i^*)\Delta t} } \right)} \hfill \cr }$$
(27)

where η i is a positive constant. For simplicity, the initial values of γ i are zero.
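The updates (25)–(27) can be sketched as follows (our own illustrative code; the function name, the toy scalar system in the test, and the parameter values are assumptions, not taken from the paper):

```python
import numpy as np

def tune_gamma_online(f, x0, x_star, eta, dt, steps):
    """Online gradient-descent search (25)-(27) for the convergence factors.

    gamma starts at zero and is adapted so that the one-step-ahead error
    criterion q_i(k+1) = (x_i(k+1) - x_i*)^2 / 2 decreases along its
    gradient with respect to gamma_i.
    """
    x = np.array(x0, dtype=float)
    x_star = np.asarray(x_star, dtype=float)
    gamma = np.zeros_like(x)
    acc = np.zeros_like(x)                  # accumulative error sum_j (x_i(j) - x_i*) dt
    for _ in range(steps):
        e = x - x_star
        eps = gamma * acc                   # control strength (6), discretized
        x_new = x + (f(x) + eps * e) * dt   # sampled system (25)
        e_new = x_new - x_star
        gamma -= eta * e_new * e * dt * acc # gradient update (27)
        acc += e * dt                       # extend the accumulative error
        x = x_new
    return gamma, x
```

On a simple unstable scalar system such as ẋ = x with x* = 0, this search drives γ negative, in agreement with the sign rule (28) below.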

The second method is rough estimation combined with trial and error. It is a simple yet reliable strategy for choosing an appropriate γ i . We simplify condition (18) to \({\gamma _i}\int_0^t {({x_i} - x_i^*)} {\rm{d}}t < 0\) according to the negative feedback. For an arbitrary initial point x(0), we choose γ i according to

$${\gamma _i}({x_i}(0) - x_i^*)\Delta t < 0.$$
(28)

If the trajectory escapes to infinity, the absolute value of γ i is too small to satisfy condition (18), and we should increase it or employ (19). However, if the absolute value of γ i is large enough to cause oscillation, the convergence time is likely to increase. It is recommended that different values of γ i be chosen for different initial points to achieve the best convergence.
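A helper implementing the sign rule (28) might look as follows (our own sketch; the name choose_gamma and the magnitude argument are hypothetical):

```python
import numpy as np

def choose_gamma(x0, x_star, magnitude):
    """Pick the sign of each gamma_i from condition (28):
    gamma_i (x_i(0) - x_i*) < 0, i.e., gamma_i opposes the initial error.
    The magnitude is then tuned by trial and error as described above."""
    e0 = np.asarray(x0, dtype=float) - np.asarray(x_star, dtype=float)
    return -magnitude * np.sign(e0)
```

For the Rossler example below, with x(0) = (200, −200, 200) and the inner fixed point, this reproduces the signs γ1 = γ3 < 0 and γ2 > 0 used in Figs. 1 and 2.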

4 Numerical simulations

In this section, we adopt three examples to show the effectiveness of this method and verify the above theoretical results.

Example 1. The Rossler system.

The Rossler system[21, 23] is depicted as

$$\left\{ {\matrix{{{{\dot x}_1} = - ({x_2} + {x_3})} \hfill \cr {{{\dot x}_2} = {x_1} + \alpha {x_2}} \hfill \cr {{{\dot x}_3} = \beta + {x_3}({x_1} - \tau)} \hfill \cr } } \right.$$

where \({x_1},{x_2},{x_3},\alpha, \beta, \tau \in {\bf{R}}\). The system with α = 0.2, β = 0.2, τ = 5.7 is chaotic. Its unstable fixed points are (0.007, − 0.0351, 0.0351) and (5.693, −28.4648, 28.4648). We utilize the fourth-order Runge-Kutta scheme with \(\Delta t = {10^{ - 3}}\). The initial point is (200, −200, 200).

When (0.007, −0.0351, 0.0351) is the control goal, we choose different convergence factors according to (28) and obtain the phase diagram, trajectory diagram and the curves of ε 1, ε 2, ε 3 shown in Figs. 1 and 2, respectively. It can be seen that ε(t) satisfies condition (18) from t > 0.06 in Fig. 1 (c) and from t > 0.01 in Fig. 2 (c).

Fig. 1
figure 1

The Rossler system is driven to (0.007, −0.0351, 0.0351) starting from (200, −200, 200) with γ1 = γ3 = −2 and γ2 = 2

Fig. 2
figure 2

The Rossler system is driven to (0.007, −0.0351, 0.0351) starting from (200, −200, 200) with γ1 = γ3 = −2000 and γ2 = 2000

In Fig. 1, the convergence time is 0.8 and ε* = [−18.4213, −28.0899, −262.3980]T, while in Fig. 2 they are 0.017 and ε* = [−1251.7, −1250.5, −1547.6]T. The eigenvalues of \(D{f_{{x^*}}} + {G^*}\) are −18.5283, −27.7831, −268.0909 and −1251 + 0.7i, −1251 − 0.7i, −1553.3, respectively, all with negative real parts. Compared with the eigenvalues of \(D{f_{{x^*}}}\), which are 0.0970 + 0.9952i, 0.0970 − 0.9952i, −5.6870, it is easily concluded that, under the premise of convergence from the same initial point, the larger the absolute value of γ i is, the further the eigenvalues of the linearized system matrix are moved to the left, and the faster the convergence is. There exists a wide range of γ i guaranteeing the convergence of systems like this example.
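The eigenvalue shift described here can be checked directly (our own sketch using numpy; the Jacobi matrix is written out from the Rossler equations, and ε* is the final strength reported above for Fig. 1):

```python
import numpy as np

ALPHA, TAU = 0.2, 5.7
x_star = np.array([0.007, -0.0351, 0.0351])

# Jacobi matrix Df of the Rossler system evaluated at the fixed point x*
Df = np.array([[0.0,       -1.0,  -1.0],
               [1.0,       ALPHA,  0.0],
               [x_star[2],  0.0,   x_star[0] - TAU]])

eps_star = np.array([-18.4213, -28.0899, -262.3980])  # final strengths from Fig. 1
G_star = np.diag(eps_star)

ev_open = np.linalg.eigvals(Df)             # unstable: a pair with positive real part
ev_closed = np.linalg.eigvals(Df + G_star)  # shifted left: all real parts negative
```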

As for the fixed point (5.693, −28.4648, 28.4648), we choose γ1 = γ3 = −20 and γ2 = 20 according to (28). The trajectory x(t) reaches the fixed point at t = 0.19, and ε(t) arrives at (−87.8591, −84.3045, −338.6637), satisfying condition (18) from t > 0.03, as shown in Fig. 3. All eigenvalues of \(D{f_{{x^*}}} + {G^*}\) have negative real parts, too.

Fig. 3
figure 3

The Rossler system is driven to (5.693, −28.4648, 28.4648) starting from (200, −200, 200) with γ1 = γ3 = −20 and γ2 = 20

To display the robustness of the proposed method, we add uniformly distributed random noise in the range [−20, 20] to x(t). Fig. 4 indicates that x(t) eventually approaches the fixed point and ultimately fluctuates slightly around it.

Fig. 4
figure 4

The Rossler system is driven to (0.007, −0.0351, 0.0351) starting from (2, −10, 20) with γ1 = γ3 = −20 and γ2 = 20
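The noise-free version of the Fig. 4 setup can be reproduced in a few lines (our own sketch; the RK4 implementation and names are not from the paper):

```python
import numpy as np

ALPHA, BETA, TAU = 0.2, 0.2, 5.7

def rossler(x):
    # Uncontrolled Rossler vector field
    return np.array([-(x[1] + x[2]),
                     x[0] + ALPHA * x[1],
                     BETA + x[2] * (x[0] - TAU)])

def rhs(z, x_star, gamma):
    # Augmented state z = (x, eps): controlled system (4) plus strength update (5)
    x, eps = z[:3], z[3:]
    e = x - x_star
    return np.concatenate([rossler(x) + eps * e, gamma * e])

def rk4_step(z, dt, x_star, gamma):
    k1 = rhs(z, x_star, gamma)
    k2 = rhs(z + 0.5 * dt * k1, x_star, gamma)
    k3 = rhs(z + 0.5 * dt * k2, x_star, gamma)
    k4 = rhs(z + dt * k3, x_star, gamma)
    return z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

x_star = np.array([0.007, -0.0351, 0.0351])
gamma = np.array([-20.0, 20.0, -20.0])              # signs chosen by (28), as in Fig. 4
z = np.array([2.0, -10.0, 20.0, 0.0, 0.0, 0.0])     # x(0) = (2, -10, 20), eps(0) = 0
dt = 1e-3
for _ in range(2000):                               # integrate up to t = 2
    z = rk4_step(z, dt, x_star, gamma)
```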

In order to illustrate the differences between (6) and (7), we adopt (7) to simulate the above example. The trajectory from the initial point (200, −200, 200) diverges, as shown in Fig. 5, so r i cannot be arbitrarily large. For initial points far from x*, especially when the absolute value of the error is much larger than 1, the convergence of (7) is faster but r i cannot be too large, while for initial points near x *, the convergence of (6) is faster, as shown in Fig. 6. The magnitudes of γ i and r i are the same, but the sign of γ i is decided by condition (28).

Fig. 5
figure 5

The Rossler system is driven to (5.693, −28.4648, 28.4648) by the control strength (7) starting from (200, −200, 200) with r 1 = r 2 = r 3 = 30

Fig. 6
figure 6

The Rossler system is driven to (0.007, −0.0351, 0.0351) by the control strengths (6) and (7) respectively starting from (0, 0, 0)

Example 2. Lorenz system.

We consider the Lorenz system[13, 24]

$$\left\{ {\matrix{{{{\dot x}_1} = {\theta _1}(- {x_1} + {x_2})} \hfill \cr {{{\dot x}_2} = {\theta _2}{x_1} - {x_2} - {x_1}{x_3}} \hfill \cr {{{\dot x}_3} = {x_1}{x_2} - {\theta _3}{x_3}} \hfill \cr } } \right.$$

and the corresponding receiver system

$$\left\{ {\matrix{{{{\dot y}_1} = {\theta _1}(- {y_1} + {y_2}) + {\varepsilon _1}({y_1} - {x_1})} \hfill \cr {{{\dot y}_2} = {\theta _2}{y_1} - {y_2} - {y_1}{y_3} + {\varepsilon _2}({y_2} - {x_2})} \hfill \cr {{{\dot y}_3} = {y_1}{y_2} - {\theta _3}{y_3} + {\varepsilon _3}({y_3} - {x_3})} \hfill \cr } } \right.$$

with the control strength (6), where \({x_1},{x_2},{x_3},{\theta _1},{\theta _2},{\theta _3} \in {\bf{R}}\). Let \({\theta _1} = 10,{\theta _2} = 28,{\theta _3} = {\textstyle{8 \over 3}}\) and ε 1 = ε 3 = 0. We repeat the simulation in [13] and obtain the synchronization of the two Lorenz systems shown in Fig. 7. It can be seen that (a) converges fastest while (b) oscillates least, which indicates that the control strength (6) is superior to (7) in many aspects.
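The paper does not state the convergence factor used for Fig. 7, so the following sketch (our own code, with assumed values) adopts the sign-switched factor (19) with γ20 = 20; only the single adaptive strength ε 2 couples the two systems, as described above.

```python
import numpy as np

TH1, TH2, TH3 = 10.0, 28.0, 8.0 / 3.0

def lorenz(x):
    return np.array([TH1 * (-x[0] + x[1]),
                     TH2 * x[0] - x[1] - x[0] * x[2],
                     x[0] * x[1] - TH3 * x[2]])

def rhs(w, g0):
    # w = (x, y, eps2): drive system, receiver coupled only through x2,
    # and the single adaptive strength eps2 with the sign-switched factor (19)
    x, y, e2 = w[:3], w[3:6], w[6]
    err2 = y[1] - x[1]
    dy = lorenz(y)
    dy[1] += e2 * err2                                  # coupling in the second equation only
    return np.concatenate([lorenz(x), dy, [-g0 * abs(err2)]])  # (19): eps2' = -g0*|err2|

def rk4_step(w, dt, g0):
    k1 = rhs(w, g0)
    k2 = rhs(w + 0.5 * dt * k1, g0)
    k3 = rhs(w + 0.5 * dt * k2, g0)
    k4 = rhs(w + dt * k3, g0)
    return w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

w = np.array([2.0, 3.0, 7.0, 3.0, 4.0, 8.0, 0.0])  # (x; y) = (2,3,7; 3,4,8), eps2(0) = 0
dt, g0 = 1e-3, 20.0
for _ in range(20000):                             # integrate up to t = 20
    w = rk4_step(w, dt, g0)
```

Because ε 2 keeps decreasing while the error persists, its magnitude grows until synchronization sets in and then settles at a constant, in line with Theorem 1.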

Fig. 7
figure 7

The synchronization of two Lorenz systems achieved by only the signal x 2. Here the initial values of (x; y) are set as (2, 3, 7; 3, 4, 8)

Example 3. A simple pendulum.

We consider a simple planar pendulum[20] whose motion is governed by

$$\left\{ {\matrix{{{{\dot x}_1} = {x_2}} \hfill \cr {{{\dot x}_2} = - \sin ({x_1})(1 - \sin (t)).} \hfill \cr } } \right.$$

The unstable fixed point is (0, 0) and the initial values are (0.2, 0.1). The controlled system is

$$\left\{ {\matrix{{{{\dot x}_1} = {x_2} + {\varepsilon _1}({x_1} - 0)} \hfill \cr {{{\dot x}_2} = - \sin ({x_1})(1 - \sin (t)) + {\varepsilon _2}({x_2} - 0).} \hfill \cr } } \right.$$

According to (19) and (28), we employ γ1 = − 100 sgn(x 1 − 0) and γ2 = − 100 sgn(x 2 − 0). The numerical results in Figs. 8 and 9 show that the system is stabilized to the fixed point (0, 0) by the control strengths (6) and (7), respectively, and that (6) converges faster than (7). At the same time, the example indicates that the control strengths (6) and (7) can be applied to \(\dot x = f(x,t)\) when f(x, t) is bounded in t.
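This pendulum run can be sketched as follows (our own code; the RK4 step and variable names are illustrative). Note that the vector field depends on t, so the integrator passes the time explicitly:

```python
import numpy as np

def rhs(t, w):
    # w = (x1, x2, eps1, eps2): controlled pendulum with the factors (19),
    # gamma_i = -100 sgn(x_i - 0), so eps_i' = -100 |x_i|
    x1, x2, e1, e2 = w
    return np.array([x2 + e1 * x1,
                     -np.sin(x1) * (1.0 - np.sin(t)) + e2 * x2,
                     -100.0 * abs(x1),
                     -100.0 * abs(x2)])

def rk4_step(t, w, dt):
    k1 = rhs(t, w)
    k2 = rhs(t + 0.5 * dt, w + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, w + 0.5 * dt * k2)
    k4 = rhs(t + dt, w + dt * k3)
    return w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

w = np.array([0.2, 0.1, 0.0, 0.0])  # x(0) = (0.2, 0.1), eps(0) = 0
t, dt = 0.0, 1e-3
for _ in range(5000):               # integrate up to t = 5
    w = rk4_step(t, w, dt)
    t += dt
```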

Fig. 8
figure 8

The pendulum system is driven to (0, 0) by the control strength (6) with γ1 = −100 sgn(x 1 − 0) and γ2 = −100 sgn(x 2 − 0)

Fig. 9
figure 9

The pendulum system is driven to (0, 0) by the control strength (7) with r 1 = r 2 = 100

5 Conclusions

We present the adaptive control method based on the accumulative error (AEC), which is quite robust against the effect of noise. How the convergence factor affects the convergence is studied, and the property of the final control strength is given. As the control strength is a linear function of the accumulative error and can be realized by ordinary RC circuits, the proposed method is simpler and easier to implement in practice than many other adaptive methods.

In addition, the control idea can be generalized to discrete chaotic systems, although the choice of the convergence factor is then more difficult. The control strength changes with the error and reaches a constant vector, which moves the eigenvalues of the linearized system matrix at the fixed point to the left for continuous systems, or into the unit circle for discrete systems.