Abstract
We present an adaptive control scheme based on accumulative error to stabilize unstable fixed points of chaotic systems that satisfy only a local Lipschitz condition, and discuss how the convergence factor affects convergence and the characteristics of the final control strength. We define a minimal local Lipschitz coefficient, which relaxes the condition for chaos control. Compared with other adaptive methods, this control scheme is simple and easy to implement with integral circuits in practice. It is also robust against the effect of noise. These points are illustrated with numerical examples.
1 Introduction
Since Ott et al.[1] first proposed the method of chaos control in 1990, it has attracted great attention and many successful results have been reported[2–17], such as feedback control[5, 6], impulsive control[7], the back-stepping method[8], adaptive control[9–14], sliding mode control[15, 16], fuzzy control[17], finite-time control and so on. However, most methods target a particular class of chaotic systems and are complex in both design and implementation. How to control chaotic systems with a simple, physically realizable and general controller is particularly significant for both theoretical research and practical applications[18].
One practicable method is linear feedback, but it is difficult to find a suitable feedback constant[10]. Recently, Huang proposed an adaptive feedback control method to effectively synchronize two almost arbitrary identical chaotic systems in a series of papers[9, 13, 19–21]. However, a uniform Lipschitz condition on the nonlinear vector field is always assumed in advance, and these papers could not guarantee the feasibility of the adaptive control technique (ACT, where the control strength obeys \({\dot \varepsilon _i} = - {r_i}e_i^2,\;i = 1,2, \cdots, n\), with e i being the error) in non-uniform Lipschitz systems[11]. Lin[11] generalized the ACT to systems that satisfy only a local Lipschitz condition. In fact, the above methods[9, 11, 13, 19–21] are speed-gradient methods in essence. The initial value of the control strength ε i is difficult to choose, and the square of the error is not easy to implement in practice.
Besides, none of these works studied the effect of the convergence factor. In fact, the convergence factor has a valid range, and its choice is of great importance for ensuring a limited convergence time, which has great practical significance. It also affects the property of the final control strength, which helps us understand the essence of chaos control. However, there has been little research on how the convergence factor affects the convergence time and the property of the final control strength.
Inspired by these discussions, we present a novel control strength calculated from the accumulative error, which involves no difficulty in choosing initial values, and discuss how the convergence factor affects the convergence of systems and the characteristics of the final control strength. The rest of the paper is organized as follows. The algorithm of accumulative error and its mathematical proof are given in Section 2. In Section 3, two methods of choosing the convergence factor are discussed. The control scheme is applied to three examples in Section 4. Finally, conclusions are drawn in Section 5.
2 Control scheme of accumulative error
Consider the following n-dimensional continuous chaotic system

\(\dot x = f(x)\)  (1)

where x = [x 1, x 2, ⋯, x n ]T ∈ R n is the state variable, and f(x) = [f 1(x), f 2(x), ⋯, f n (x)]T ∈ R n is an n-dimensional continuously differentiable nonlinear vector function which satisfies a local Lipschitz condition, i.e., for any compact set S, there exists a positive constant l i (S) such that

\(\left| {{f_i}(x) - {f_i}(y)} \right| \le {l_i}(S)\left\| {x - y} \right\|,\quad \forall x,y \in S\)  (2)
where ∥·∥ denotes the Euclidean norm, and the existence of l i (S) depends on the choice of the compact set S. There also exists a minimal coefficient l(x, y) for a given pair of points (x, y).
Definition 1. For a given pair of points (x, y), x ≠ y, the minimal local Lipschitz coefficient l(x, y) is the constant defined by

\(l(x,y) = {{{{(x - y)}^{\rm{T}}}(f(x) - f(y))} \over {{{\left\| {x - y} \right\|}^2}}}.\)  (3)
The minimal local Lipschitz coefficient l(x, y) is a function of (x, y), and it may be negative, positive or zero. Its upper bound is

\(l(x,y) \le \sum\limits_{i = 1}^n {{l_i}(S)}.\)
Systems in the form of (1) satisfying (2) and (3) include all the well-known chaotic and hyperchaotic systems.
The unstable fixed point is denoted as \({x^*} = {[x_1^*,x_2^*, \cdots, x_n^*]^{\rm{T}}}\), which satisfies f(x*) = 0. To stabilize this equilibrium, we design the controlled system as

\({\dot x_i} = {f_i}(x) + {\varepsilon _i}({x_i} - x_i^*),\quad i = 1,2, \cdots, n\)  (4)

\({\dot \varepsilon _i} = {\gamma _i}({x_i} - x_i^*)\)  (5)
where γ i is the convergence factor. It may be negative or positive. The value of γ i is related to the initial point and determines the convergence time.
Because of the negative feedback, the initial value of ε(t) is a non-positive constant vector with a small absolute value. For simplicity, setting ε(0) = 0, we obtain the control strength from (5) as

\({\varepsilon _i}(t) = {\gamma _i}\int_0^t {({x_i} - x_i^*)} \,{\rm{d}}t.\)  (6)
This means the controlling term changes with the error, starting from zero and ending at zero, so it can be regarded as a non-invasive control. In essence, the controlling term is the product of the error and its integral, which is not the proportional-integral (PI) control. It is significantly different from the usual linear feedback in [5, 12], where ε i is a constant or piecewise constant. In particular, a fixed control strength is used in linear feedback wherever the initial point starts, so the strength must be maximal, which is a kind of waste in practice[13].
Definition 2. The control strength ε i (t) in (6) is a linear function of the integral of the error (the accumulative error in essence) over the past time. Therefore, we call the control method (4)–(6) accumulative error control (AEC).
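The adaptation law (5) with ε(0) = 0 is just an extra integrator attached to each controlled component, so the whole scheme can be sketched in a few lines. The following is a minimal illustration on the scalar test system \(\dot x = x\), which is not an example from this paper; the values x(0) = 2 and γ = −2 are chosen for illustration only, with the sign of γ following the negative-feedback rule.

```python
# Minimal sketch of accumulative error control (AEC) on the scalar test
# system x' = x, whose fixed point x* = 0 is unstable.  The controlled
# system (4)-(5) reads
#   x'   = f(x) + eps*(x - x_star)
#   eps' = gamma*(x - x_star)
# With eps(0) = 0, eps(t) is gamma times the accumulated error, as in (6).

def simulate_aec(x0, x_star=0.0, gamma=-2.0, dt=1e-3, steps=10_000):
    x, eps = x0, 0.0
    for _ in range(steps):
        e = x - x_star
        x += dt * (x + eps * e)   # forward Euler on the controlled system
        eps += dt * gamma * e     # accumulative-error adaptation law (5)
    return x, eps

x_final, eps_final = simulate_aec(2.0)
print(x_final, eps_final)   # x -> 0 while eps settles at a negative constant
```

Because \(\dot \varepsilon = \gamma (x - x^*)\) stops changing once the error vanishes, ε freezes at a negative constant, consistent with Theorem 1 below.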
The control strength (6) is different from the following control strength in [11, 13, 14]:

\({\dot \varepsilon _i} = - {r_i}{({x_i} - x_i^*)^2}\)  (7)

where r i is an arbitrary positive constant.
Remark 1. The control strength (6) is a linear function of accumulative error, which possesses certain physical meaning in real systems and can be realized by integral circuits, such as ordinary resistance-capacitance (RC) circuits. Therefore, it is simpler and much easier to implement than (7).
Remark 2. The control strength (6) does not require calculation of initial value, while the initial value of ε i (t) in (7) is difficult to choose[11].
In fact, the control strength (7) can be deduced by the speed-gradient method. We take the error criterion function as

\(Q(x,t) = {1 \over 2}\sum\limits_{i = 1}^n {{{({x_i} - x_i^*)}^2}}\)

and its time derivative is

\({{{\rm{d}}Q(x,t)} \over {{\rm{d}}t}} = \sum\limits_{i = 1}^n {({x_i} - x_i^*)\left[ {{f_i}(x) + {\varepsilon _i}({x_i} - x_i^*)} \right]}.\)

According to the speed-gradient method, the change of ε i is opposite to the gradient of \({{{\rm{d}}Q(x,t)} \over {{\rm{d}}t}}\) with respect to ε i , so we obtain the adaptation rule

\({\dot \varepsilon _i} = - {r_i}{\partial \over {\partial {\varepsilon _i}}}\left({{{{\rm{d}}Q(x,t)} \over {{\rm{d}}t}}} \right) = - {r_i}{({x_i} - x_i^*)^2}.\)
Therefore, the control strength (6), rather than (7), is a novel control method. Next, we verify its stability.
Noticing that the boundedness of every trajectory produced by system (4) is crucial to the achievement of chaos control, we investigate under what conditions the trajectories of system (4) remain bounded.
We introduce a Lyapunov function as

\(V(x,t) = {1 \over 2}\sum\limits_{i = 1}^n {{{({x_i} - x_i^*)}^2}}.\)
Select an arbitrary initial point
Set a constant as
and construct a closed ball by
where σ is an arbitrary constant larger than 1. It follows from (3) that there exists a minimal local Lipschitz coefficient l(x, x*) for any \(x \in {\Phi _{\sqrt {\sigma p} }}({x^*})\) satisfying
Due to the local Lipschitz condition, the solution of system (4), denoted by x(t) = x(t; 0, x(0)), is unique on its maximal interval of existence [0, t 1). Next, we prove that the end point t 1 is infinity. First, we claim that \(x(t) \in {\Phi _{\sqrt {\sigma p} }}({x^*})\) for any t ∈ [0, t 1), and we reason by contradiction. If this claim is not true, there exists a time constant
which is the first time trajectory x(t) hits the boundary of the ball. The derivative of function V(x, t) for t ∈ [0,t 2] is
under the condition that
For any t ∈ [0, t 2], each x i (t) is bounded, so there exists γ i satisfying

\({\varepsilon _i}(t) < - l(x,{x^*})\)  (18)

where −l(x, x*) is the negative of the minimal local Lipschitz coefficient; it is a function of the variable x. The range of ε i < −l(x, x*) is larger than that of \({\varepsilon _i} < - \sum\limits_{i = 1}^n {{l_i}} (\Phi)\), which relaxes the condition for chaos control.
In fact, the sign of γ i makes ε i < 0, and the magnitude of γ i makes ε i < −l(x, x*). Although ε i and l(x, x*) vary with x, \({\varepsilon _i}(t) = {\gamma _i}\int_0^t {({x_i} - x_i^*)} \,{\rm{d}}t\) is the integral of the error, so occasional changes in the sign of \({({x_i} - x_i^*)}\) may not affect the validity of ε i < −l(x, x*). However, for situations where the sign of \({({x_i} - x_i^*)}\) changes frequently, such as for some trigonometric functions, we adopt

\({\gamma _i} = - {\gamma _{i0}}\,{\rm{sgn}}({x_i} - x_i^*)\)  (19)

where γ i0 is a positive constant. Therefore, with an appropriate γ i0, the value of ε i is always less than −l(x, x*) for the same variable x.
Obviously, it follows that V(x(t),t) is decreasing on [0,t 2], which leads to
This is a contradiction, which validates our claim: the trajectory \(x(t){|_{[0,{t_1})}}\), starting inside the bounded ball, never leaves it. Thus, by virtue of the theory of ordinary differential equations, we can extend the end point t 1 to +∞ and conclude that x(t)∣[0,+∞) is bounded under condition (18).
The largest invariant set of
only has one element, x = x*. In light of the LaSalle invariance principle[22], we conclude that the trajectory x(t) starting from x(0) approaches x* as t → +∞. Therefore, the controlled system (4) is asymptotically stable at x*. This means there exists a finite time t f such that ∥x − x*∥ < δ for t ∈ [t f , +∞), where δ is an arbitrarily small positive constant or accuracy demand. Therefore, ε i (t) approaches a negative constant \(\varepsilon _i^*\) for t ≥ t f .
Furthermore, condition (18) need not hold for all t ∈ [0, +∞). We assume x i (t) is bounded for any t ∈ [0, t 3], and
Then, for any t ∈ [t 3, +∞), if there exists γ i satisfying condition (18), x(t) asymptotically approaches x* as t → +∞.
Theorem 1. For the continuous chaotic system (4) with accumulative error control in the form of (5), x(t) is bounded for any t ∈ [0, t 3]. If there exist γ i , i = 1, 2, ⋯, n, which meet condition (18) for any t ∈ [t 3, +∞), then x(t) is asymptotically stable at x*, starting from any initial point \(x(0) \in {\Phi _{\sqrt {\sigma {p_{\max }}} }}({x^*})\). At the same time, ε arrives at a negative constant vector ε*.
Therefore, the essence of chaos control is negative feedback. The convergence factor γ i in (6) satisfying condition (18) ensures the negative feedback. The chaotic systems that can be stabilized by the control strength (7) can also be stabilized by (6). However, (6) has a simpler structure than (7) and is easier to implement in practice. It can be directly extended to time-varying systems of the form \(\dot x = f(x,t)\), which will be illustrated in the following simulations.
In particular, some initial points which are far from the fixed point and would generate unbounded trajectories in the uncontrolled chaotic system can be stabilized by this control method. This indicates that x* has a large attractive region, which implies that such chaos control is quite robust against the effect of noise. This will also be illustrated in the following simulations.
In what follows, we discuss some characteristics of ε*.
Setting e = x − x*, we linearize (4) at the equilibrium point x* and get

\(\dot e = (D{f_{{x^*}}} + {G^*})e\)

where \(D{f_{{x^*}}} = {\textstyle{{\partial f} \over {\partial x}}}{|_{x = {x^*}}}\) is the Jacobian matrix at x*, and \({G^*} = {\rm{diag}}\{ \varepsilon _1^*,\varepsilon _2^*, \cdots, \varepsilon _n^*\}\).
As system (1) is unstable at x*, not all eigenvalues of Df x* have negative real parts. From Theorem 1, we know \(\varepsilon _i^* < 0\). Therefore, according to linear control theory and linear algebra, we have the following remarks.
Remark 3. All eigenvalues of \(D{f_{{x^*}}} + {G^*}\) lie to the left of the corresponding eigenvalues of \(D{f_{{x^*}}}\). Under the premise of convergence, the larger the absolute value of γ i is, the further the eigenvalues of the linearized system matrix are moved to the left. If the absolute value of γ i is large enough, all eigenvalues of \(D{f_{{x^*}}} + {G^*}\) have negative real parts. Therefore, the effect of the controlling term is to move the eigenvalues of the system matrix to the left in the complex plane.
Remark 4. The final control strength ε* can be used as a constant linear feedback gain for the control of continuous chaotic systems.
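Remarks 3 and 4 can be checked numerically. The sketch below, assuming NumPy, evaluates the eigenvalues of \(D{f_{{x^*}}}\) and \(D{f_{{x^*}}} + {G^*}\) for the Rossler values reported in Example 1 (Fig. 1): the uncontrolled Jacobian has a complex pair with positive real part, while adding G* moves every eigenvalue into the left half-plane.

```python
import numpy as np

# Jacobian of the Rossler system (alpha = 0.2, tau = 5.7) at the fixed
# point x* = (0.007, -0.0351, 0.0351) used in Example 1.
Df = np.array([[0.0,    -1.0, -1.0],
               [1.0,     0.2,  0.0],
               [0.0351,  0.0,  0.007 - 5.7]])

# Final control strengths eps* reported for Fig. 1.
G = np.diag([-18.4213, -28.0899, -262.3980])

print(np.linalg.eigvals(Df))      # a pair with positive real part remains
print(np.linalg.eigvals(Df + G))  # every real part is now negative
```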
As the stability of the controller (7) has been proved in [9, 11, 13, 19–21], it follows from (18) that there exists r i satisfying
It can be seen from (24) that r i cannot be arbitrarily small. However, if r i is too large, the trajectory produced by system (4) may be unbounded. Therefore, r i also has a range that guarantees convergence.
It is easily concluded that Theorem 1, Remark 3 and Remark 4 also hold for the control strength (7).
3 Two methods for choosing convergence factor
Although we have verified the existence of γ i , it is difficult to determine their specific values for different initial points. Numerical simulation and the estimation of various characteristics remain the main methods for studying chaotic systems[2]. Therefore, we choose the values of γ i by the following two methods.
The first method adopts the gradient descent algorithm to search for γ i online or off-line. From the first-order sampled version of (4), we have
where Δt denotes the time interval. We set the error criterion function of each component as
From the gradient descent algorithm, we get
where η i is a positive constant. For simplicity, the initial values of γ i are set to zero.
The second method is rough estimation combined with trial and error. It is a simple yet reliable strategy for choosing an appropriate γ i . We simplify condition (18) to \({\gamma _i}\int_0^t {({x_i} - x_i^*)} \,{\rm{d}}t < 0\) according to the negative feedback. For an arbitrary initial point x(0), we choose γ i according to

\({\gamma _i}({x_i}(0) - x_i^*) < 0.\)  (28)
If the trajectory escapes to infinity, the absolute value of γ i is too small to satisfy condition (18), and we should increase it or employ (19). However, if the absolute value of γ i is large enough to cause oscillation, the convergence time is likely to increase. It is recommended that different values of γ i be chosen for different initial points to achieve the best convergence.
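The trial-and-error strategy can be sketched as follows, again on the scalar test system \(\dot x = x\) rather than one of this paper's examples: the sign of γ is fixed by (28), and the magnitude is doubled whenever a trial trajectory escapes a prescribed bound. (For this particular system any negative γ already converges, so the loop body never fires; it simply documents the escalation rule.)

```python
# Trial-and-error choice of gamma for AEC on x' = x with x* = 0.
# The sign of gamma follows (28); |gamma| is doubled after any escape.

def trial(gamma, x0, dt=1e-3, steps=20_000, bound=1e6):
    x, eps = x0, 0.0
    for _ in range(steps):
        x += dt * (x + eps * x)    # controlled system (4), forward Euler
        eps += dt * gamma * x      # adaptation law (5)
        if abs(x) > bound:
            return None            # escaped: |gamma| too small
    return x, eps

x0 = 3.0
gamma = -0.1 if x0 > 0 else 0.1    # sign rule (28): gamma opposite to error
while (result := trial(gamma, x0)) is None:
    gamma *= 2.0                   # escalate the magnitude and retry
print(gamma, result)               # stabilized: x near 0, eps negative
```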
4 Numerical simulations
In this section, we adopt three examples to show the effectiveness of this method and verify the above theoretical results.
Example 1. The Rossler system.
The Rossler system[21, 23] is depicted as

\({\dot x_1} = - {x_2} - {x_3},\quad {\dot x_2} = {x_1} + \alpha {x_2},\quad {\dot x_3} = \beta + {x_3}({x_1} - \tau)\)

where x 1, x 2, x 3, α, β, τ ∈ R. The system with α = 0.2, β = 0.2, τ = 5.7 is chaotic. Its unstable fixed points are (0.007, −0.0351, 0.0351) and (5.693, −28.4648, 28.4648). We utilize the fourth-order Runge-Kutta scheme with Δt = 10−3. The initial point is (200, −200, 200).
When (0.007, −0.0351, 0.0351) is the control goal, according to (28), we choose different convergence factors and obtain the phase diagram, the trajectory diagram and the curves of ε 1, ε 2, ε 3 shown in Figs. 1 and 2, respectively. It can be seen that ε(t) satisfies condition (18) from t > 0.06 in Fig. 1(c) and from t > 0.01 in Fig. 2(c).
In Fig. 1, the convergence time is 0.8 and ε* = [−18.4213, −28.0899, −262.3980]T, while in Fig. 2 they are 0.017 and ε* = [−1251.7, −1250.5, −1547.6]T. The eigenvalues of \(D{f_{{x^*}}} + {G^*}\) are −18.5283, −27.7831, −268.0909 and −1251 + 0.7i, −1251 − 0.7i, −1553.3, respectively, all of which have negative real parts. Compared with the eigenvalues of \(D{f_{{x^*}}}\), which are 0.0970 + 0.9952i, 0.0970 − 0.9952i, −5.6870, it is easily concluded that, under the premise of convergence from the same initial point, the larger the absolute value of γ i is, the further the eigenvalues of the linearized system matrix are moved to the left, and the faster the convergence is. There exists a wide range of γ i that guarantees the convergence of systems like this example.
As for the fixed point (5.693, −28.4648, 28.4648), we choose γ1 = γ3 = −20 and γ2 = 20 according to (28). The trajectory x(t) reaches the fixed point at t = 0.19, and ε(t) arrives at (−87.8591, −84.3045, −338.6637), satisfying condition (18) from t > 0.03, as shown in Fig. 3. All eigenvalues of \(D{f_{{x^*}}} + {G^*}\) have negative real parts, too.
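This run can be cross-checked with a short script. The sketch below (assuming NumPy; it is not the authors' code) integrates the controlled Rossler system with the fourth-order Runge-Kutta scheme, Δt = 10−3, from x(0) = (200, −200, 200) with γ1 = γ3 = −20, γ2 = 20.

```python
import numpy as np

# AEC stabilization of the Rossler fixed point x* = (5.693, -28.4648, 28.4648),
# as in Fig. 3.  The state vector packs [x1, x2, x3, eps1, eps2, eps3].
alpha, beta, tau = 0.2, 0.2, 5.7
xs = np.array([5.693, -28.4648, 28.4648])   # target fixed point
gamma = np.array([-20.0, 20.0, -20.0])      # signs chosen by (28)

def rhs(state):
    x, eps = state[:3], state[3:]
    f = np.array([-x[1] - x[2],
                  x[0] + alpha * x[1],
                  beta + x[2] * (x[0] - tau)])
    return np.concatenate([f + eps * (x - xs),   # controlled system (4)
                           gamma * (x - xs)])    # adaptation law (5)

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([200.0, -200.0, 200.0, 0.0, 0.0, 0.0])  # eps(0) = 0
for _ in range(1000):                                    # t in [0, 1]
    state = rk4(state, 1e-3)

print(state[:3] - xs)   # residual error: essentially zero
print(state[3:])        # eps*: a negative constant vector
```

The final ε should settle near the reported (−87.9, −84.3, −338.7); small deviations are expected from the rounded fixed-point coordinates.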
To display the robustness of the proposed method, we add a uniformly distributed random noise in the range [−20, 20] to x(t). Fig. 4 indicates that x(t) eventually approaches the fixed point and ultimately fluctuates slightly around it.
To illustrate the differences between (6) and (7), we apply (7) to the above example. The trajectory from the initial point (200, −200, 200) diverges, as shown in Fig. 5, which shows that r i cannot be arbitrarily large. For initial points far from x*, especially when the absolute value of the error is much larger than 1, the convergence of (7) is faster but r i cannot be too large, while for initial points near x*, the convergence of (6) is faster, as shown in Fig. 6. The magnitudes of the convergence factors γ i and r i are the same, but the sign of γ i is decided by condition (28).
Example 2. Lorenz system.
We consider the Lorenz system[13, 24]

\({\dot x_1} = {\theta _1}({x_2} - {x_1}),\quad {\dot x_2} = {\theta _2}{x_1} - {x_2} - {x_1}{x_3},\quad {\dot x_3} = {x_1}{x_2} - {\theta _3}{x_3}\)

and the corresponding receiver system
with the control strength (6), where x 1, x 2, x 3, θ 1, θ 2, θ 3 ∈ R. Let \({\theta _1} = 10,{\theta _2} = 28,{\theta _3} = {\textstyle{8 \over 3}}\) and ε 1 = ε 3 = 0. We repeat the simulation in [13] and obtain the synchronization of the two Lorenz systems, as shown in Fig. 7. It can be seen that the fastest convergence occurs in (a) and the smallest oscillation in (b), which indicates that the control strength (6) is superior to (7) in many aspects.
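A reduced sketch of this experiment (assuming NumPy): the receiver is coupled to the drive through the second component only, and, since the sign of e2 = y2 − x2 oscillates, γ2 follows the sign rule (19), so that \(\dot \varepsilon _2 = - {\gamma _{20}}|e_2| \le 0\) is monotonically non-increasing. The value γ20 = 10, the initial points and the forward-Euler step are illustrative assumptions, not the settings behind Fig. 7.

```python
import numpy as np

# Synchronizing a receiver Lorenz system to a drive through component 2
# only (eps1 = eps3 = 0), with gamma2 chosen by the sign rule (19).
theta = (10.0, 28.0, 8.0 / 3.0)   # theta1, theta2, theta3

def lorenz(x):
    return np.array([theta[0] * (x[1] - x[0]),
                     theta[1] * x[0] - x[1] - x[0] * x[2],
                     x[0] * x[1] - theta[2] * x[2]])

x = np.array([1.0, 1.0, 1.0])      # drive system
y = np.array([5.0, -5.0, 10.0])    # receiver, controlled on component 2
eps2, gamma20, dt = 0.0, 10.0, 1e-3

for _ in range(20_000):            # t in [0, 20], forward Euler
    e2 = y[1] - x[1]
    dy = lorenz(y)
    dy[1] += eps2 * e2                  # control acts on x2 only
    x = x + dt * lorenz(x)
    y = y + dt * dy
    eps2 += dt * (-gamma20 * abs(e2))   # (19): gamma2*e2 = -gamma20*|e2| <= 0

print(np.linalg.norm(y - x), eps2)   # sync error shrinks; eps2 is negative
```

The monotone adaptation keeps strengthening the coupling until the synchronization error decays, after which ε2 effectively freezes.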
Example 3. A simple pendulum.
We consider a simple planar pendulum[20] whose motion is governed by
The unstable fixed point is (0, 0) and the initial values are (0.2, 0.1). The controlled system is
According to (19) and (28), we employ γ1 = −100 sgn(x 1 − 0) and γ2 = −100 sgn(x 2 − 0). The numerical results in Figs. 8 and 9 show that the system is stabilized at the fixed point (0, 0) by the control strengths (6) and (7), respectively, and that the control strength (6) converges faster than (7). At the same time, this example indicates that the control strengths (6) and (7) can be applied to \(\dot x = f(x,t)\) when f(x, t) is bounded in t.
5 Conclusions
We have presented the accumulative error control (AEC) method, an adaptive scheme that is quite robust against the effect of noise. The question of how the convergence factor affects convergence has been studied, and the properties of the final control strength have been given. As the control strength is a linear function of the accumulative error, it can be realized by ordinary RC circuits, so the proposed method is simpler and easier to implement in practice than many other adaptive methods.
In addition, the control idea can be generalized to discrete chaotic systems, although the choice of the convergence factor is then more difficult. The control strength changes with the error and reaches a constant vector, which moves the eigenvalues of the linearized system matrix at the fixed point to the left for continuous systems, or into the unit circle for discrete systems.
References
E. Ott, C. Grebogi, J. A. Yorke. Controlling chaos. Physical Review Letters, vol. 64, no. 11, pp. 1196–1199, 1990.
B. R. Andrievskii, A. L. Fradkov. Control of chaos: Methods and applications. I. Methods. Automation and Remote Control, vol. 64, no. 5, pp. 673–713, 2003.
A. L. Fradkov, R. J. Evans. Control of chaos: Methods and applications in engineering. Annual Reviews in Control, vol. 29, no. 1, pp. 33–56, 2005.
S. A. Sarra, C. Meador. On the numerical solution of chaotic dynamical systems using extend precision floating point arithmetic and very high order numerical methods. Nonlinear Analysis: Modelling and Control, vol. 16, no. 3, pp. 340–352, 2011.
H. Richter, K. J. Reinschke. Local control of chaotic systems-A Lyapunov approach. International Journal of Bifurcation and Chaos, vol. 8, no. 7, pp. 1565–1573, 1998.
T. Li, A. G. Song, S. M. Fei. Master-slave synchronization for delayed Lur’e systems using time-delay feedback control. Asian Journal of Control, vol. 13, no. 6, pp. 879–892, 2011.
D. Chen, J. Sun, C. Huang. Impulsive control and synchronization of general chaotic system. Chaos, Solitons & Fractals, vol. 28, no. 1, pp. 213–218, 2006.
H. Wang, X. J. Zhu, Z. Z. Han, S. W. Gao. A new stepping design method and its application in chaotic systems. Asian Journal of Control, vol. 14, no. 1, pp. 230–238, 2012.
D. B. Huang. Stabilizing near-nonhyperbolic chaotic systems with applications. Physical Review Letters, vol. 93, no. 21, Article 214101, 2004.
R. W. Guo. A simple adaptive controller for chaos and hyper-chaos synchronization. Physics Letters A, vol. 372, no. 17, pp. 5593–5597, 2008.
W. Lin. Adaptive chaos control and synchronization in only locally Lipschitz systems. Physics Letters A, vol. 372, no. 18, pp. 3195–3200, 2008.
B. A. Huberman, E. Lumer. Dynamics of adaptive systems. IEEE Transactions on Circuits and Systems, vol. 37, no. 4, pp. 547–550, 1990.
D. Huang. Simple adaptive-feedback controller for identical chaos synchronization. Physical Review E, vol. 71, no. 3, Article 037203, 2005.
G. Chen. A simple adaptive feedback control method for chaos and hyper-chaos control. Applied Mathematics and Computation, vol. 217, no. 17, pp. 7258–7264, 2011.
R. F. Zhang, D. Y. Chen, J. G. Yang, J. Wang. Anti-synchronization for a class of multi-dimensional autonomous and non-autonomous chaotic systems on the basis of the sliding mode with noise. Physica Scripta, vol. 85, no. 6, Article 065006, 2012.
M. C. Pai. Adaptive sliding mode observer-based synchronization for uncertain chaotic systems. Asian Journal of Control, vol. 14, no. 3, pp. 736–743, 2012.
K. Tanaka, T. Ikeda, H. O. Wang. A unified approach to controlling chaos via an LMI-based fuzzy control system design. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 45, no. 10, pp. 1021–1040, 1998.
N. J. Corron, S. D. Pethel, B. A. Hopper. Controlling chaos with simple limiters. Physical Review Letters, vol. 84, no. 17, pp. 3835–3838, 2000.
D. Huang. Synchronization-based estimation of all parameters of chaotic systems from time series. Physical Review E, vol. 69, no. 6, Article 067201, 2004.
D. Huang. Adaptive-feedback control algorithm. Physical Review E, vol. 73, no. 6, Article 066204, 2006.
D. Huang. Synchronization in adaptive weighted networks. Physical Review E, vol. 74, no. 4, Article 046208, 2006.
J. P. LaSalle. Stability theory for ordinary differential equations. Journal of Differential Equations, vol. 4, no. 1, pp. 57–65, 1968.
O. E. Rossler. An equation for continuous chaos. Physics Letters A, vol. 57, no. 5, pp. 397–398, 1976.
E. N. Lorenz. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130–141, 1963.
The work was supported by National Natural Science Foundation of China (Nos. 61273088, 10971120 and 61001099) and Natural Science Foundation of Shandong Province (No. ZR2010FM010).
Zhang, FF., Liu, ST. & Liu, KX. Adaptive Control of Accumulative Error for Nonlinear Chaotic Systems. Int. J. Autom. Comput. 11, 527–535 (2014). https://doi.org/10.1007/s11633-014-0821-6