Almost sure stabilization of hybrid systems by feedback control based on discrete-time observations of mode and state

Although the mean square stabilization of hybrid systems by feedback control based on discrete-time observations of state and mode has been studied by several authors since 2013, the corresponding almost sure stabilization problem has received little attention. Recently, Mao was the first to study the almost sure stabilization of a given unstable system ẋ(t) = f(x(t)) by a linear discrete-time stochastic feedback control Ax([t/τ]τ)dB(t) (namely, the stochastically controlled system has the form dx(t) = f(x(t))dt + Ax([t/τ]τ)dB(t)), where B(t) is a scalar Brownian motion, τ > 0, and [t/τ] is the integer part of t/τ. In this paper, we consider a much more general problem: the almost sure stabilization of a given unstable hybrid system ẋ(t) = f(x(t), r(t)) by a nonlinear discrete-time stochastic feedback control u(x([t/τ]τ), r([t/τ]τ))dB(t), so that the stochastically controlled system is a hybrid stochastic system of the form dx(t) = f(x(t), r(t))dt + u(x([t/τ]τ), r([t/τ]τ))dB(t), where B(t) is a multi-dimensional Brownian motion and r(t) is a Markov chain.


Introduction
In recent years, stochastic systems have attracted many researchers since many practical systems can be modeled by them. Many significant results for stochastic systems have been reported (see [1][2][3][4][5][6][7][8][9][10][11][12][13]). Markovian jump systems are a special class of hybrid stochastic systems, which arise in engineering applications including power systems, manufacturing systems, ecosystems, and so forth. The literature in this area is huge and many papers are open access, so we only mention a few [14][15][16][17][18]. Shaikhet [19] provided sufficient conditions for asymptotic mean square stability of Markovian systems with delay. Mao [20] discussed the exponential stability of general nonlinear Markovian jump systems.
As is well known, a given unstable system can be stabilized by noise, and noise can also be used to make an already stable system more stable. Arnold et al. [21] pointed out that a linear system can be stabilized by zero-mean stationary parameter noise. In [22], a linear hybrid stochastic system was stabilized by Gaussian-type noise. In addition, Khasminskii [23] showed that a system can be stabilized by using two types of white noise. It was shown in [24] that an unstable nonlinear system can be stabilized by Brownian motion provided the growth condition is linear. Mao [25] showed that any nonlinear system ẋ(t) = f(x(t), t) whose coefficient satisfies |f(x, t)| ≤ K|x| for some K > 0 can be stabilized by Brownian motions. It is worth noting that Appleby et al. [26] presented a general theory on the stochastic stabilization of nonlinear functional differential equations by noise. Mao et al. [27] showed that an unstable Markovian jump system ẋ(t) = f(x(t), r(t), t) can be stabilized by a stochastic control acting on only part of the subsystems. In other words, the state space S of the Markov chain was divided into two proper subspaces S_1 and S_2, i.e., S = S_1 ∪ S_2. In summary, Mao et al. [27] considered the controlled stochastic system dx(t) = f(x(t), r(t), t)dt + u(r(t), t)dB(t), where u(i, t) = 0 for i ∈ S_1 while u(i, t) = u(i, x(t)) was a feedback control for i ∈ S_2. New methods and sufficient conditions for the stochastic stabilization of Markovian jump systems were provided in [28].
With some applications, two examples of stabilization and destabilization by noise in the plane were presented in [29]. We should of course point out that the corresponding problem based on discrete-time state observations has already been studied by some authors. Mao [30] was the first to study this stabilization problem. He also obtained a bound τ* on τ such that the controlled system is stable as long as τ < τ* (plus some other conditions, of course). Here τ > 0 is the duration between two consecutive observations. From the point of view of control cost, it is clearly better to have a larger τ*. Influenced by [30], a number of recent papers (e.g., [31,32]) have significantly improved the bound τ*. Mao et al. [31] established a better bound on τ* by considering a couple of important classes of hybrid stochastic systems and exploiting their special features. A better bound on τ* was also obtained in [32] by making use of Lyapunov functionals. In particular, Song et al. [33] pointed out that the discrete-time feedback control in controlled hybrid stochastic systems should be based not only on the discrete-time observations of the state, x(kτ) (k = 0, 1, 2, ...), but also on the discrete-time observations of the mode, r(kτ) (k = 0, 1, 2, ...).
Observing that all the papers mentioned above were concerned with the mean square stabilization by discrete-time feedback control in the drift part, Mao [34] discussed the almost sure exponential stabilization by discrete-time feedback control in the diffusion part. Given an unstable nonlinear system ẋ(t) = f(x(t)), Mao designed a feedback control Ax([t/τ]τ), based on the discrete-time state observations, in the diffusion part so that the corresponding closed-loop system dx(t) = f(x(t))dt + Ax([t/τ]τ)dB(t) was almost surely exponentially stable. Here B(t) was a scalar Brownian motion, f : R^n → R^n satisfied x^T f(x) ≤ α|x|^2 for some α > 0 with f(0) = 0, and A was an n × n real-valued matrix such that |Ax|^2 ≤ ρ_1|x|^2 and |x^T Ax|^2 ≥ ρ_2|x|^4 for all x ∈ R^n, for some positive numbers ρ_1 and ρ_2 satisfying ρ_2 − 0.5ρ_1 > α. Mao [34] showed that there was a positive number τ* such that the controlled system was almost surely exponentially stable provided that τ < τ*. To the best of the authors' knowledge, the problem of almost sure exponential stabilization for hybrid stochastic systems has received little attention, in particular in the framework of stochastic feedback control based on discrete-time observations of mode and state.
These observations motivate us to consider the following more general problem: if the given unstable system is a hybrid system ẋ(t) = f(x(t), r(t)), can we design a discrete-time feedback control u(x([t/τ]τ), r([t/τ]τ)), based on the discrete-time observations of both state and mode, in the diffusion part so that the closed-loop system dx(t) = f(x(t), r(t))dt + u(x([t/τ]τ), r([t/τ]τ))dB(t) is almost surely exponentially stable? Here B(t) is an m-dimensional Brownian motion, r(t) is a Markov chain in a finite state space S, f : R^n × S → R^n, and u : R^n × S → R^{n×m}. We highlight a number of key features.
• The stabilization is in the sense of almost sure exponential stability, about which far less is known than about mean square stabilization.
• The control u is in the diffusion part, it is nonlinear, and B(t) is multi-dimensional.
• The controlled system is a hybrid stochastic delay system.
• The discrete-time feedback control u(x([t/τ ]τ ), r([t/τ ]τ )) is based on the discrete-time observations of both state and mode.
Let us begin to investigate this more general stabilization problem.

Preliminaries and notation
Throughout this paper, the notation is fairly standard. Here |·| denotes the Euclidean norm in R^n. For a vector or matrix A, A^T denotes its transpose and |A| = √(trace(A^T A)) denotes its trace norm. For a symmetric matrix A, λ_max(A) and λ_min(A) denote its largest and smallest eigenvalues, respectively. Let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions, that is, it is right continuous and F_0 contains all P-null sets. Let B(t), t ≥ 0, be an m-dimensional Brownian motion. The continuous-time Markov chain r(t), t ≥ 0, takes values in a given finite set S = {1, 2, ..., N} and has the generator Γ = (γ_ij)_{N×N} given by

P{r(t + ∆) = j | r(t) = i} = γ_ij ∆ + o(∆) if i ≠ j, and 1 + γ_ii ∆ + o(∆) if i = j,

with ∆ > 0, where γ_ij ≥ 0 (i ≠ j) denotes the transition rate from i to j and γ_ii = −Σ_{j≠i} γ_ij. The notation π = (π_1, π_2, ..., π_N) ∈ R^{1×N} represents the stationary (probability) distribution of the chain, namely the solution of the linear equation πΓ = 0 subject to Σ_{j=1}^N π_j = 1 and π_j > 0 for all j ∈ S. In particular, we recall that −γ_ii = Σ_{j≠i} γ_ij > 0. We state a lemma that estimates the probability of jumps.
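As a concrete illustration (not taken from the paper), the stationary distribution π of a generator Γ can be computed numerically by solving πΓ = 0 together with the normalization Σ_j π_j = 1; the two-state generator below is a made-up example.

```python
import numpy as np

# Hypothetical two-state generator: rows sum to zero, off-diagonals >= 0.
Gamma = np.array([[-1.0,  1.0],
                  [ 4.0, -4.0]])

# Solve pi @ Gamma = 0 subject to sum(pi) = 1 by appending the
# normalization as an extra equation and using least squares.
N = Gamma.shape[0]
A = np.vstack([Gamma.T, np.ones(N)])
b = np.zeros(N + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)  # stationary distribution; all entries positive
```

For this generator the balance equation π_1γ_12 = π_2γ_21 gives π = (0.8, 0.2), which the least-squares solve recovers exactly since the augmented system is consistent.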
To prove this lemma, define the stopping time ζ_i = inf{s ≥ t : r(s) ≠ i} given r(t) = i, where we set inf ∅ = ∞ and ∅ denotes the empty set as usual. It is well known (see [14]) that ζ_i − t has the exponential probability distribution with parameter −γ_ii, whence the assertion follows. Consider the following unstable hybrid system

ẋ(t) = f(x(t), r(t))     (7)

with the initial conditions x(0) = x_0 ∈ R^n and r(0) = r_0 ∈ S, where x(t) ∈ R^n is the state and r(t) is the mode. We are required to design a stochastic feedback control u(x([t/τ]τ), r([t/τ]τ))dB(t), based on the observations of the state x([t/τ]τ) and the mode r([t/τ]τ) at the discrete times 0, τ, 2τ, ..., such that the corresponding closed-loop system

dx(t) = f(x(t), r(t))dt + u(x([t/τ]τ), r([t/τ]τ))dB(t)     (8)

becomes almost surely exponentially stable, where the constant τ > 0 denotes the duration between two consecutive observations, [t/τ] is the integer part of t/τ, and u : R^n × S → R^{n×m} is the control input. For the existence and uniqueness of the solution to the controlled system, we impose the following assumption.
Assumption 1. There exist two positive constants K_1 and K_2 such that

|f(x, i) − f(y, i)| ≤ K_1|x − y| and |u(x, i) − u(y, i)| ≤ K_2|x − y|     (9)

for all x, y ∈ R^n and i ∈ S. This assumption guarantees that for any initial state x(0) = x_0 ∈ R^n and mode r(0) = r_0 ∈ S, the controlled system (8) has a unique solution x(t) on t ∈ R_+ with E|x(t)|^2 < ∞ for all t ≥ 0. In fact, for t ∈ [0, τ], system (8) becomes dx(t) = f(x(t), r(t))dt + u(x_0, r_0)dB(t) with the initial state x(0) = x_0 and mode r(0) = r_0. It is easy to show (see [27, Theorem 3.13]) that this hybrid stochastic system has a unique solution on [0, τ]. Taking x(τ) and r(τ) as the initial data at t = τ, it is easy to see that the system has a unique solution x(t) on t ∈ [τ, 2τ] with E|x(t)|^2 < ∞. Repeating this procedure proves the claim. Let us denote the solution by x(t; x_0, r_0). We can easily show that if x_0 = 0, then x(t; 0, r_0) = 0 for all t ≥ 0 almost surely. This is known as the trivial solution.
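For intuition, the closed-loop system (8) can be simulated by the Euler–Maruyama scheme, holding the control at the last observed state and mode between observation instants. All dynamics, gains, rates, and step sizes below are illustrative assumptions, not values from the paper; the gains satisfy c_i²/2 > a_i, the kind of condition under which the continuously observed system is almost surely stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar example: f(x, i) = a_i * x is unstable in every
# mode; u(x, i) = c_i * x is a linear stabilizing control (made-up values).
a = {1: 0.5, 2: 0.3}                 # drift coefficients per mode
c = {1: 1.5, 2: 1.8}                 # control gains per mode
Gamma = np.array([[-1.0,  1.0],
                  [ 2.0, -2.0]])     # generator of r(t) on S = {1, 2}

tau = 0.01        # duration between consecutive observations
h = 0.001         # Euler-Maruyama step (h divides tau)
T = 50.0
obs_every = round(tau / h)

x, r = 1.0, 1
x_obs, r_obs = x, r                  # last observed state and mode
for k in range(int(T / h)):
    if k % obs_every == 0:           # observation instant t = k*h
        x_obs, r_obs = x, r
    dB = rng.normal(0.0, np.sqrt(h))
    x = x + a[r] * x * h + c[r_obs] * x_obs * dB
    # one Euler step of the Markov chain: leave mode i at rate -gamma_ii
    if rng.random() < -Gamma[r - 1, r - 1] * h:
        r = 2 if r == 1 else 1

lyap = np.log(abs(x)) / T            # crude pathwise Lyapunov exponent
print(x, lyap)
```

For small τ the sample path decays exponentially, so the pathwise Lyapunov exponent estimate comes out negative; taking τ large destroys this behavior, which is precisely the role of the bound τ* studied below.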
The purpose of this paper is to find sufficient conditions on the coefficient f and the control input u, as well as a positive bound τ*, such that the controlled system (8) is almost surely exponentially stable whenever τ ≤ τ*. By almost sure exponential stability we mean that

lim sup_{t→∞} (1/t) log|x(t; x_0, r_0)| < 0 almost surely

for any (x_0, r_0) ∈ R^n × S (see [7,8,23,25]). We also observe that as τ → 0, the controlled system (8) formally becomes the corresponding hybrid stochastic system

dy(t) = f(y(t), r(t))dt + u(y(t), r(t))dB(t)     (10)

on t ≥ 0 with the initial condition (y(0), r(0)) = (x_0, r_0). Under Assumption 1, system (10) has a unique solution (see [17,20]). Denote the unique solution by y(t; x_0, r_0) on t ≥ 0.

Main results
We see clearly from the discussion in the previous section that the conditions we need to impose should at least guarantee the almost sure exponential stability of the corresponding hybrid stochastic system (10).
Although there are many useful criteria for almost sure exponential stability, we use the one established in [20]. Accordingly, we impose the following assumptions.
Assumption 2. For each i ∈ S, there exists a constant triple α_i ∈ R, ρ_i ≥ 0, and σ_i ≥ 0 such that

x^T f(x, i) ≤ α_i|x|^2,  |u(x, i)| ≤ ρ_i|x|,  |x^T u(x, i)| ≥ σ_i|x|^2     (11)

for all x ∈ R^n. Set ᾱ = max_{i∈S} α_i and ρ̄ = max_{i∈S} ρ_i.
Assumption 3. There exists a constant p ∈ (0, 1) such that the N × N matrix

A(p) := diag(−θ_1(p), ..., −θ_N(p)) − Γ     (12)

is a nonsingular M-matrix, where

θ_i(p) := p(α_i + ρ_i^2/2 − (2 − p)σ_i^2/2)     (13)

and α_i, σ_i, ρ_i are the constants specified in Assumption 2.
Let us make some comments on these assumptions. First, we point out that Assumptions 1 and 2 force

f(0, i) = 0 and u(0, i) = 0 for all i ∈ S,     (14)

which meets the stability purpose of this paper. In fact, u(0, i) = 0 follows from the second inequality in (11). To show that f(0, i) = 0 for all i ∈ S, assume on the contrary that there is some i ∈ S such that z := f(0, i) ≠ 0. Choose a constant b such that 0 < b < 1/(K_1 + |α_i|) and let x = bz. Then, by the first inequality in (11), x^T f(x, i) ≤ α_i b^2|z|^2. On the other hand, by Assumption 1, |f(x, i) − z| ≤ K_1 b|z|, whence x^T f(x, i) ≥ b|z|^2(1 − K_1 b). Hence, b|z|^2(1 − K_1 b) ≤ α_i b^2|z|^2. This implies 1 ≤ (K_1 + α_i)b ≤ (K_1 + |α_i|)b, but this is a contradiction as b < 1/(K_1 + |α_i|). We therefore must have (14). Recalling that y(t; x_0, r_0) denotes the solution of the hybrid stochastic system (10), we can hence invoke a significant property given in Mao [20, Lemma 2.1], which yields

P{y(t; x_0, r_0) ≠ 0 for all t ≥ 0} = 1 whenever x_0 ≠ 0.     (15)

That is, if the initial state of system (10) is nonzero, almost all trajectories of system (10) will never reach the origin. Thus, Lyapunov functions can be chosen in a variety of ways.
We also emphasize that we are only interested in the case where ᾱ ≥ 0 in this paper; otherwise, the given hybrid system (7) is already stable (see [28]) and there is no need to stabilize it by feedback control. We should also point out that we always have ᾱ ≤ K_1 and ρ̄ ≤ K_2, but we may have ᾱ < K_1 and ρ̄ < K_2 in many cases. For example, consider the scalar case where f(x, i) = −x + a_i sin(x) and u(x, i) = x − b_i sin(x) with a_i ∈ [1, 2], b_i ∈ (0, 1], a_1 = 2, and b_1 = 1. It is easy to see that K_1 = 3 and K_2 = 2. On the other hand, xf(x, i) = −x^2 + a_i x sin(x) ≤ (a_i − 1)|x|^2, so ᾱ = 1 < K_1. Moreover, |u(x, i)| = |x − b_i sin(x)| ≤ (1 + 2b_i/π)|x|, so ρ̄ ≤ 1 + 2/π < K_2. We also observe that once Assumption 2 holds, the verification of Assumption 3 depends very much on the choice of p ∈ (0, 1). In Appendix A, we give some easier conditions that guarantee the existence of such a p and, hence, for Assumption 3 to hold.
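The bound xf(x, i) ≤ (a_i − 1)x² used in this example is easy to check numerically; the grid check below is an illustration, not part of the paper.

```python
import numpy as np

# Grid check of x * f(x, i) <= (a_i - 1) * x**2 for
# f(x, i) = -x + a_i * sin(x) with a_i in [1, 2] (sample values).
xs = np.linspace(-50.0, 50.0, 200001)
ok = True
for a_i in (1.0, 1.5, 2.0):
    lhs = xs * (-xs + a_i * np.sin(xs))      # x * f(x, i)
    rhs = (a_i - 1.0) * xs**2                # claimed upper bound
    ok = ok and bool(np.all(lhs <= rhs + 1e-9))
print(ok)
```

The check succeeds because lhs − rhs = a_i(x sin(x) − x²) ≤ 0, as x sin(x) ≤ x² for all real x.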
The following lemma shows that the corresponding hybrid stochastic system (10) is exponentially stable in the pth moment and, hence, by [27,Theorem 5.9], the hybrid stochastic system is also almost surely exponentially stable.
Lemma 2. Let Assumptions 1–3 hold. Define

(c_1, ..., c_N)^T := A(p)^{-1} 1,     (16)

where 1 = (1, ..., 1)^T ∈ R^N, so all c_i are positive by the theory of M-matrices [2,27] or by Lemma A1 in Appendix A, and let c_min = min_{1≤i≤N} c_i and c_max = max_{1≤i≤N} c_i. Then the solution of the hybrid stochastic system (10) satisfies

E|y(t; x_0, r_0)|^p ≤ M e^{−γt}|x_0|^p, t ≥ 0,     (17)

for all (x_0, r_0) ∈ R^n × S, where γ = 1/c_max and M = c_max/c_min.

Proof. For x_0 = 0 we have y(t; 0, r_0) = 0, so the assertion holds trivially. Fix x_0 ≠ 0 and r_0 ∈ S arbitrarily and write y(t; x_0, r_0) = y(t). Recalling (15), we have y(t) ≠ 0 for all t ≥ 0 almost surely. Define the Lyapunov function V(x, t, i) = c_i e^{γt}|x|^p. We can therefore apply the generalized Itô formula (see [27, Theorem 1.45]) to obtain

EV(y(t), t, r(t)) = V(x_0, 0, r_0) + E ∫_0^t LV(y(s), s, r(s))ds.

By Assumption 2 and then using definition (13) of θ_i(p), we have

LV(y, t, i) ≤ e^{γt}|y|^p (γc_i + c_i θ_i(p) + Σ_{j=1}^N γ_ij c_j).

However, by (16) and (12), c_i θ_i(p) + Σ_{j=1}^N γ_ij c_j = −1, while γc_i ≤ 1. Hence, we have LV(y, t, i) ≤ 0.
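The constants c_i, γ = 1/c_max, and M = c_max/c_min are straightforward to compute once A(p) is available; the sketch below uses made-up values of θ_i(p) and Γ purely for illustration.

```python
import numpy as np

# Illustrative data (not from the paper): two modes with theta_i(p) < 0
# and a generator Gamma; A(p) = diag(-theta) - Gamma.
theta = np.array([-0.4, -0.1])
Gamma = np.array([[-1.0,  1.0],
                  [ 3.0, -3.0]])
Ap = np.diag(-theta) - Gamma

# c = A(p)^{-1} * 1; positivity of c certifies A(p) is a nonsingular M-matrix.
c = np.linalg.solve(Ap, np.ones(2))
c_min, c_max = c.min(), c.max()
gamma_rate, M = 1.0 / c_max, c_max / c_min
print(c, gamma_rate, M)
```

Both entries of c come out positive here, so this A(p) passes the nonsingular M-matrix test and yields the decay rate γ and constant M of Lemma 2.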

This implies c_min e^{γt} E|y(t)|^p ≤ c_max|x_0|^p, which is the desired assertion (17). To simplify our notation, from now on we let δ_t = [t/τ]τ for t ≥ 0 and set t_k = kτ for k = 0, 1, 2, ....
Lemma 3. Let Assumptions 1 and 2 hold. Then, for any initial condition (x_0, r_0) ∈ R^n × S, the moment estimates (19) and (20) hold for all t ≥ 0.
Proof. Fix any (x_0, r_0) ∈ R^n × S and write x(t; x_0, r_0) = x(t). By the Itô formula and Assumption 2, it follows that ... for t ≥ 0. As the second term on the right-hand side is increasing in t, we obtain ..., which implies the desired assertion (19) by the well-known Gronwall inequality. Moreover, by Assumptions 1 and 2, we further derive that ... This, together with (19), implies the other assertion (20).
However, by Lemma 3, ... To estimate J_3(t), we let κ = κ(t) = [t/τ]. Then ... By Assumption 2, we can derive that, for t_k ≤ s ≤ t ∧ t_{k+1}, ... However, by Lemma 1, ... Hence ... Substituting this into (25), we obtain ... However, by Lemma 3, we then have ... Putting this into (28) gives ... Substituting (24) and (29) into (23), we obtain ... Using the well-known Gronwall inequality, we have ... Finally, we can obtain the desired assertion (21) by applying the Hölder inequality.
Lemma 5. Let Assumptions 1–3 hold. Choose a free parameter ε ∈ (0, 1). Let τ̄ > 0 be the unique root of equation (31), where H(τ) and γ, M have been given in Lemmas 4 and 2, respectively. Then, for each τ ∈ (0, τ̄], there exist a positive integer κ̄ and a positive number λ such that, for every initial condition (x_0, r_0) ∈ R^n × S, the solution of system (8) satisfies (32).

Proof. It is easy to see that the left-hand side of (31) is a continuous increasing function of τ ≥ 0 which equals zero at τ = 0 and tends to infinity as τ → ∞; thus Eq. (31) must have a unique root τ̄ > 0. Fix τ ∈ (0, τ̄] and (x_0, r_0) ∈ R^n × S arbitrarily and write x(kτ; x_0, r_0) = x_k for k = 0, 1, 2, .... Let κ̄ be the smallest positive integer that is no less than log(M/ε)/(γτ), namely (33), where γ and M have been defined in Lemma 2. This implies M e^{−γκ̄τ} ≤ ε.
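Since the left-hand side of (31) increases continuously from 0 to ∞, the root τ̄ can be computed numerically by bisection. The function H below is only a stand-in with those monotonicity properties, not the paper's actual expression.

```python
import math

# Stand-in for the left-hand side of Eq. (31): continuous, increasing,
# zero at tau = 0, unbounded as tau -> infinity (illustrative only).
def H(tau):
    return tau * math.exp(tau)

eps = 0.5   # the free parameter epsilon in (0, 1)

# Bracket the root, then bisect until H(tau_bar) = eps to high precision.
lo, hi = 0.0, 1.0
while H(hi) < eps:
    hi *= 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if H(mid) < eps:
        lo = mid
    else:
        hi = mid
tau_bar = 0.5 * (lo + hi)
print(tau_bar)
```

Monotonicity guarantees the bracketing loop terminates and the bisection converges to the unique root; any guaranteed-bracketing root finder would do equally well.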
Using (35) and Lemma 4, we obtain that ... Noting from (33) that κ̄τ < τ + log(M/ε)/γ, we have ..., where Eq. (31) has been used. We may therefore write the resulting bound as e^{−λκ̄τ} for some λ > 0. It then follows from (36) that E|x_κ̄|^p ≤ e^{−λκ̄τ}|x_0|^p. Let us now discuss the solution x(t) of the hybrid stochastic system (8) on t ≥ κ̄τ. This can be regarded as the solution of system (8) with initial condition (x_κ̄, r(κ̄τ)) at time t = κ̄τ. Owing to the time-homogeneity of system (8), we can thus easily show that E|x_{2κ̄}|^p ≤ e^{−λκ̄τ} E|x_κ̄|^p.
Repeating this procedure, we obtain the assertion as desired. The proof is hence complete. We are now in a position to present and prove our main results in this section.
An application of the Hölder inequality yields ..., where C_1 = e^{(ᾱ+0.5ρ̄^2)pκ̄τ}. This, together with Lemma 5, implies ..., where C_2 = C_1 e^{λκ̄τ}. In other words, we have shown that the controlled system (8) is exponentially stable in the pth moment. However, this is not yet what we require.
In the remainder of this proof, we show that this pth moment exponential stability yields the desired almost sure exponential stability. We should of course point out that [27, Theorem 5.9] shows this implication for hybrid stochastic systems, but our controlled system (8) is, in fact, a hybrid stochastic delay system. In the area of hybrid stochastic delay systems, Mao et al. [27, Theorem 7.24] showed this implication for p ≥ 1, but here we have p ∈ (0, 1). Let z be a positive integer sufficiently large for (38) to hold and set ε = τ/z. Let integers k ≥ 0 and 0 ≤ l ≤ z − 1 be arbitrary. For t ∈ [t_k + lε, t_k + (l + 1)ε], it follows from system (8) that ... By the Burkholder–Davis–Gundy inequality (see [25]) and inequality (38), we then derive that ..., where C_3 = C_2|x_0|^p(1 + c_p ε^{p/2} ρ̄^p) and c_p is the constant from the Burkholder–Davis–Gundy inequality. By (39), we hence have, for all k = 0, 1, 2, ...,

P{ sup_{t_k ≤ t ≤ t_{k+1}} |x(t)|^p ≥ e^{−0.5λt_k} } ≤ C_4 e^{−0.5λt_k},

where C_4 = 2zC_3. The Borel–Cantelli lemma shows that, with probability 1, sup_{t_k ≤ t ≤ t_{k+1}} |x(t)|^p < e^{−0.5λt_k} holds for all but finitely many k. That is, for almost all ω ∈ Ω, there is an integer k_0 = k_0(ω) such that this bound holds whenever k ≥ k_0. Therefore, for t_k ≤ t ≤ t_{k+1} and k ≥ k_0,

(1/t) log|x(t)| ≤ −0.5λ t_k / (p t_{k+1}).
Letting t → ∞, we obtain lim sup_{t→∞} (1/t) log|x(t)| ≤ −λ/(2p) < 0 for almost all ω ∈ Ω. This completes the proof.

Design of linear feedback controls
Consider the following unstable hybrid system ẋ(t) = f(x(t), r(t)) with the initial state x(0) = x_0 ∈ R^n and mode r(0) = r_0 ∈ S, where f : R^n × S → R^n. As before, we assume that f meets conditions (9) and (11), namely there are constants K_1 > 0 and α_i ∈ R such that |f(x, i) − f(y, i)| ≤ K_1|x − y| and x^T f(x, i) ≤ α_i|x|^2 for x, y ∈ R^n and i ∈ S. Instead of nonlinear feedback controls, we now look for linear feedback controls. To avoid complicated notation, we let B(t) be a scalar Brownian motion in this section (and leave the multi-dimensional case to the reader). The linear feedback control function we look for is of the form u(x, i) = A(i)x, so the controlled system becomes

dx(t) = f(x(t), r(t))dt + A(r([t/τ]τ))x([t/τ]τ)dB(t),     (43)

where A(i) ∈ R^{n×n} for i ∈ S and we will often write A(i) = A_i. Noting that |A_i x − A_i y| ≤ |A_i||x − y|, we see that the second inequality in (9) holds with K_2 = max_{i∈S} |A_i|.

Observable in all modes
We first consider the case where the state x(t) is observable in every mode i ∈ S. For each i ∈ S, choose a matrix D_i ∈ R^{n×n} satisfying condition (44) and a nonnegative number δ_i satisfying condition (45), and let A_i = δ_i D_i. Then the 2nd and 3rd inequalities in (11) hold with ρ_i = δ_i and σ_i = (3/4)δ_i for all x ∈ R^n. By (45), we can find a p ∈ (0, 1) sufficiently small that θ_i(p) < 0 for every i ∈ S. Consequently, recalling (12), we have A(p)1 > 0 componentwise, where 1 = (1, ..., 1)^T ∈ R^N. By Lemma A1, A(p) is a nonsingular M-matrix. In other words, we have verified Assumption 3 under condition (45). In summary, we conclude by Theorem 1 that if we choose D_i and δ_i to meet conditions (44) and (45), respectively, and let A_i = δ_i D_i for each i ∈ S, then there is a positive scalar τ* such that the stochastically controlled hybrid system (43) is almost surely exponentially stable provided that τ ≤ τ*.
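The design conditions on D_i are easy to check numerically. Since the paper's exact statement of (44) is not reproduced here, we assume, for illustration only, that it amounts to |D_i x| ≤ |x| and x^T D_i x ≥ (3/4)|x|² for all x; a plane rotation whose cosine is at least 3/4 is then an admissible choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed reading of condition (44): |D_i x| <= |x| and
# x^T D_i x >= (3/4)|x|^2.  A rotation with cos(angle) = 0.8 >= 3/4
# satisfies both (illustrative choice, not from the paper).
c, s = 0.8, 0.6
D = np.array([[c, -s],
              [s,  c]])
delta = 3.0                  # control gain delta_i (made up)
A_i = delta * D              # linear control u(x, i) = A_i x

# Spot-check |A_i x| <= delta*|x| and x^T A_i x >= (3/4)*delta*|x|^2
# on a batch of random vectors.
xs = rng.normal(size=(1000, 2))
norm2 = np.sum(xs**2, axis=1)
ok_rho = bool(np.all(np.linalg.norm(xs @ A_i.T, axis=1)
                     <= delta * np.sqrt(norm2) + 1e-9))
ok_sigma = bool(np.all(np.sum(xs * (xs @ A_i.T), axis=1)
                       >= 0.75 * delta * norm2 - 1e-9))
print(ok_rho, ok_sigma)
```

Both checks pass exactly here because a rotation preserves norms (so |A_i x| = δ|x|) and x^T D x = cos(angle)|x|² = 0.8|x|² ≥ (3/4)|x|².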

Observable in some modes
Let us now consider the case where the state of the underlying system is observable only in some modes but not all. To describe this situation, let us divide the space S of the Markov chain into two proper subspaces S_1 and S_2 (namely S = S_1 ∪ S_2 and S_1 ∩ S_2 = ∅). Assume that the state x(t) is not observable when the system is in any mode i ∈ S_1, but is fully observable in any mode i ∈ S_2. Without any loss of generality, let us assume that S_1 = {1, ..., N̄} and S_2 = {N̄ + 1, ..., N} for some 1 ≤ N̄ < N.
Let us now design our stochastic feedback control function u(x, i). Given that the system is not observable in any mode i ∈ S 1 , it is reasonable to think it is not controllable in these modes so we must have u(x, i) = 0 for i ∈ S 1 . For i ∈ S 2 , we seek the linear control function u(x, i) = A i x as in the last subsection. Hence, the controlled system can still be described by the hybrid system (43) as long as we set A i = 0 for i ∈ S 1 .
To design A i for i ∈ S 2 , we impose an additional condition that for each i ∈ S 1 , there is a j ∈ S 2 such that γ ij > 0.
In layman's terms, this condition means that if the Markov chain is in a state i ∈ S_1 at any time t, it can jump directly to a state j ∈ S_2 within a very short time with positive probability. Based on this condition, we can choose a pair of numbers p ∈ (0, 2/3) and β ∈ (0, 1) satisfying (48). We can then, for each i ∈ S_2, find a nonnegative number δ_i satisfying (49). Choose a matrix D_i satisfying condition (44) and let A_i = δ_i D_i. We therefore see that the 2nd and 3rd inequalities in (11) hold with ρ_i = δ_i and σ_i = (3/4)δ_i for i ∈ S_2, whereas ρ_i = σ_i = 0 for i ∈ S_1. Define A(p) as in (12). Then, for i ∈ S_1, the required bound holds by (48), whereas for i ∈ S_2, it holds by (49). By Lemma A1, A(p) is a nonsingular M-matrix. In other words, we have designed A_i to meet Assumption 3 under condition (47). We can therefore conclude by Theorem 1 that if we design A_i as described above, there is a positive scalar τ* such that the stochastically controlled hybrid system (43) is almost surely exponentially stable provided that τ ≤ τ*.
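As a numerical illustration of this partially observable design (all numbers are made up), one can check via Lemma A1 that A(p), formed as in (12), is a nonsingular M-matrix by verifying that A(p)^{-1}(1, ..., 1)^T is componentwise positive — even though the uncontrolled mode has θ_1(p) > 0 — provided the chain leaves that mode fast enough.

```python
import numpy as np

# Illustrative check (numbers made up): mode 1 is uncontrolled and
# unstable (theta_1 > 0), mode 2 is strongly stabilized (theta_2 < 0),
# and the chain leaves mode 1 quickly (gamma_12 = 4 > 0).
theta = np.array([0.2, -5.0])
Gamma = np.array([[-4.0,  4.0],
                  [ 1.0, -1.0]])
Ap = np.diag(-theta) - Gamma

c = np.linalg.solve(Ap, np.ones(2))
print(c)   # both entries positive => A(p) is a nonsingular M-matrix
```

If instead the chain lingered in mode 1 (small γ_12) or the stabilized mode were only weakly controlled, some entry of c would turn nonpositive and the M-matrix test would fail, mirroring the role of conditions (47)–(49).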

Conclusion
Influenced by Mao [34], we have discussed the almost sure stabilization of a given unstable hybrid differential equation ẋ(t) = f(x(t), r(t)) by a nonlinear discrete-time stochastic feedback control u(x([t/τ]τ), r([t/τ]τ))dB(t). We have shown that there is a positive number τ* such that the stochastically controlled system dx(t) = f(x(t), r(t))dt + u(x([t/τ]τ), r([t/τ]τ))dB(t) is almost surely exponentially stable provided that τ < τ*, under the global Lipschitz condition plus the condition that guarantees the almost sure exponential stability of the corresponding hybrid stochastic system dx(t) = f(x(t), r(t))dt + u(x(t), r(t))dB(t). As a special but important case, we have discussed in more detail how to design linear feedback controls.