Abstract
In many practical applications, stability with respect to only part of the system's states, together with finite-time convergence to the equilibrium state of interest, is required. Finite-time partial stability concerns dynamical systems for which part of the trajectory converges to an equilibrium state in finite time. In this paper, we address finite-time partial stability in probability and uniform finite-time partial stability in probability for nonlinear stochastic dynamical systems. Specifically, we provide Lyapunov conditions involving a Lyapunov function that is positive definite and decrescent with respect to part of the system state and satisfies a differential inequality involving fractional powers, guaranteeing finite-time partial stability in probability. In addition, we show that finite-time partial stability in probability implies uniqueness of solutions in forward time, and we establish necessary and sufficient conditions for almost sure continuity of the settling-time operator of the nonlinear stochastic dynamical system. Finally, we develop a unified framework for optimal nonlinear analysis and feedback control design achieving finite-time partial stochastic stability and finite-time, partial-state stochastic stabilization. Finite-time partial stability in probability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function that is positive definite and decrescent with respect to part of the system state and is shown to be a solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation, thereby guaranteeing both finite-time, partial-state stability and optimality. The overall framework provides the foundation for extending stochastic optimal linear–quadratic controller synthesis to nonlinear–nonquadratic optimal finite-time, partial-state stochastic stabilization.
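For orientation, an illustrative special case (not the paper's exact hypotheses) of the fractional-power differential inequality referred to above is the following: if \(V\) is positive definite and decrescent with respect to \(x_1\) and its infinitesimal generator satisfies a fractional-power bound, a finite expected settling time follows, as in the standard stochastic finite-time stability literature:

```latex
\mathcal{L}V(x) \;\le\; -c\,\bigl(V(x)\bigr)^{\theta},
\qquad c>0,\quad 0<\theta<1,
\qquad\Longrightarrow\qquad
\mathbb{E}^{x_0}\!\bigl[T(x_0)\bigr]
\;\le\; \frac{\bigl(V(x_0)\bigr)^{1-\theta}}{c\,(1-\theta)}.
```

The exact conditions and constants used in the paper (involving partial-state positive definiteness and the operator \(T(\cdot,\cdot)\)) are given in the body of the paper; the display above is only the canonical single-state template.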
References
Agarwal R, Lakshmikantham V (1993) Uniqueness and nonuniqueness criteria for ordinary differential equations. World Scientific, Singapore
Yamada T, Watanabe S (1971) On the uniqueness of solutions of stochastic differential equations. J Math Kyoto Univ 11(1):155–167
Watanabe S, Yamada T (1971) On the uniqueness of solutions of stochastic differential equations II. J Math Kyoto Univ 11(3):553–563
Roxin E (1966) On finite stability in control systems. Rendiconti del Circolo Matematico di Palermo 15(3):273–282
Bhat SP, Bernstein DS (2000) Finite-time stability of continuous autonomous systems. SIAM J Control Optim 38(3):751–766
Bhat SP, Bernstein DS (2005) Geometric homogeneity with applications to finite-time stability. Math Control Signals Syst 17(2):101–127
Moulay E, Perruquetti W (2008) Finite time stability conditions for non-autonomous continuous systems. Int J Control 81(5):797–803
Haddad WM, L’Afflitto A (2015) Finite-time partial stability, stabilization, and optimal feedback control. J Franklin Inst 352:2329–2357
Chen W, Jiao LC (2010) Finite-time stability theorem of stochastic nonlinear systems. Automatica 46(12):2105–2108
Yin J, Khoo S, Man Z, Yu X (2011) Finite-time stability and instability of stochastic nonlinear systems. Automatica 47(12):2671–2677
Haddad WM, Chellaboina V (2008) Nonlinear dynamical systems and control: a Lyapunov-based approach. Princeton University Press, Princeton
Sharov V (1978) Stability and stabilization of stochastic systems vis-a-vis some of the variables. Avtomat i Telemekh 11:63–71 (in Russian)
Ignatyev O (2009) Partial asymptotic stability in probability of stochastic differential equations. Stat Prob Lett 79:597–601
Liu D, Wang W, Ignatyev O, Zhang W (2012) Partial stochastic asymptotic stability of neutral stochastic functional differential equations with Markovian switching by boundary condition. Adv Diff Equ 220:1–8
Haimo V (1986) Finite time controllers. SIAM J Control Optim 24(4):760–770
Bhat SP, Bernstein DS (1998) Continuous finite-time stabilization of the translational and rotational double integrators. IEEE Trans Autom Control 43(5):678–682
Hong Y (2002) Finite-time stabilization and stabilizability of a class of controllable systems. Syst Control Lett 46(4):231–236
Hong Y, Huang J, Xu Y (2001) On an output feedback finite-time stabilization problem. IEEE Trans Autom Control 46(2):305–309
Qian C, Lin W (2001) A continuous feedback approach to global strong stabilization of nonlinear systems. IEEE Trans Autom Control 46(7):1061–1079
Haddad WM, L’Afflitto A (2016) Finite-time stabilization and optimal feedback control. IEEE Trans Autom Control 61:1069–1074
Moulay E, Perruquetti W (2006) Finite time stability and stabilization of a class of continuous systems. J Math Anal Appl 323(2):1430–1443
Vorotnikov VI (1998) Partial stability and control. Birkhäuser, Boston
L’Afflitto A, Haddad WM, Bakolas E (2016) Partial-state stabilization and optimal feedback control. Int J Robust Nonlin Control 26:1026–1050
Freeman R, Kokotovic P (1996) Inverse optimality in robust stabilization. SIAM J Control Optim 34(4):1365–1391
Deng H, Krstic M (1997) Stochastic nonlinear stabilization—part II: inverse optimality. Syst Control Lett 32:151–159
Khasminskii RZ (2012) Stochastic stability of differential equations. Springer, Berlin
Arnold L (1974) Stochastic differential equations: theory and applications. Wiley, New York
Øksendal B (1995) Stochastic differential equations: an introduction with applications. Springer, Berlin
Dorato P (2006) An overview of finite-time stability. In: Menini L, Zaccarian L, Abdallah CT (eds) Current trends in nonlinear systems and control. Birkhäuser, Boston, pp 185–194
Meyn SP, Tweedie RL (1993) Markov chains and stochastic stability. Springer, London
Apostol TM (1957) Mathematical analysis. Addison-Wesley, Reading
Arapostathis A, Borkar VS, Ghosh MK (2012) Ergodic control of diffusion processes. Cambridge University Press, Cambridge
Curtis HD (2014) Orbital mechanics for engineering students. Elsevier, Oxford
Junkins J, Schaub H (2009) Analytical mechanics of space systems. AIAA Education Series, Reston
Salehi SV, Ryan E (1982) On optimal nonlinear feedback regulation of linear plants. IEEE Trans Autom Control 27(6):1260–1264
Crandall MG, Evans LC, Lions PL (1984) Some properties of viscosity solutions of Hamilton–Jacobi equations. Trans Am Math Soc 282(2):487–502
Clarke FH, Ledyaev YS, Stern RJ, Wolenski PR (1998) Nonsmooth analysis and control theory. Springer, New York
Folland GB (1999) Real analysis: modern techniques and their applications. Wiley, New York
Mao X (1999) Stochastic versions of the LaSalle theorem. J Diff Equ 153:175–195
Chen W, Jiao LC (2010) Finite-time stability theorem of stochastic nonlinear systems. Automatica 46(12):2105–2108
Apostol TM (1974) Mathematical analysis. Addison-Wesley, Reading
Additional information
This work was supported in part by the Air Force Office of Scientific Research under Grant FA9550-16-1-0100.
Appendices
Appendix 1
Proof of Proposition 3.2
(i) It follows from Definition 2.2 that
for all \(( x_1(0), x_2(0)) \in {\mathcal {H}}^{{{\mathbb {R}}}^{n_1} \setminus \{ 0 \}}_{n_1} \times {\mathcal {H}}_{n_2}\). Hence, \(T(s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0))) = {\mathrm{inf}}\{ t_2 \in {\mathbb {R}}_+ {:}\, s_1(t_2, s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0))) = 0 \} \). Now, for \(0 \le t_1 \le T( x_1(0),\) \(x_2(0))\), the semigroup property and (96) imply that
Alternatively, for \(0 \le T( x_1(0), x_2(0)) \le t_1\), \(T(s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0)))\) \(\mathop {=}\limits ^{\text {a.s.}} 0\), which proves (11).
(ii) Necessity is immediate. To prove sufficiency, suppose that \(T(\cdot , \cdot )\) is jointly sample continuous at \((0, x_2)\), \(x_2 \in {\mathcal {H}}_{n_2}\). Let \((x_1, x_2) \in {\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2}\) and consider the sequences \(\{ x_{1n} \}_{n = 1}^{\infty } \in {\mathcal {H}}_{n_1}\) converging pointwise to \(x_1\) and \(\{ x_{2n} \}_{n = 1}^{\infty } \in {\mathcal {H}}_{n_2}\) converging pointwise to \(x_2\). Let \(\tau ^- = \liminf _{n \rightarrow \infty } T(x_{1n}, x_{2n})\) and \(\tau ^+ = \limsup _{n \rightarrow \infty } T(x_{1n}, x_{2n})\) be pointwise limits. Note that \(\tau ^-\), \(\tau ^+ \in {\mathcal {H}}^{{\mathbb {R}}_+}_1\) and \(\tau ^- \mathop {\le }\limits ^{\text {a.s.}} \tau ^+\).
Next, let \(\{ x_{1n_m} \}_{m = 0}^{\infty } \in {\mathcal {H}}_{n_1}\) be a subsequence of \(\{ x_{1n} \}\) and \(\{ x_{2n_m} \}_{m = 0}^{\infty } \in {\mathcal {H}}_{n_2}\) be a subsequence of \(\{ x_{2n} \}\) such that \(T(x_{1n_m}, x_{2n_m}) \mathop {\rightarrow }\limits ^{\text {a.s.}}\tau ^+\) as \(m \rightarrow \infty \). The sequence \(\{ (T(x_1, x_2), x_{1n_m},\) \( x_{2n_m} ) \}_{m = 1}^{\infty }\) converges in \({\mathcal {H}}^{\overline{{\mathbb {R}}}_+}_1 \times {\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2}\) to \((T(x_1, x_2), x_1, x_2 )\) almost surely as \(m \rightarrow \infty \). Since \(s_1(T(x_1, x_2) + t_1, x_1, x_2) \mathop {=}\limits ^{\text {a.s.}} 0\) for all \(t_1 \ge 0\) and since all solutions to (1) and (2) are sample continuous in their initial conditions [27, Thm. 7.3.1], it follows that \(s_1(T(x_1, x_2), x_{1n_m}, x_{2n_m}) \mathop {\rightarrow }\limits ^{\text {a.s.}} s_1(T(x_1, x_2), x_1, x_2)\mathop {=}\limits ^{\text {a.s.}} 0\) as \(m \rightarrow \infty \). Thus, since \(T(0, x_2)\) is sample continuous for all \(x_2 \in {\mathcal {H}}_{n_2}\), it follows that
Now, with \(t_1 = T(x_1, x_2)\), \(x_1(0) = x_{1n_m}\), and \(x_2(0) = x_{2n_m}\), it follows from (11) and (97) that \(T(s_1(T(x_1, x_2), x_{1n_m}, x_{2n_m}), s_2(T(x_1, x_2), x_{1n_m}, x_{2n_m})) \mathop {=}\limits ^{\text {a.s.}} \max \, \{ T(x_{1n_m}, x_{2n_m})- T(x_1, x_2), 0 \} \) and \(\max \, \{ T(x_{1n_m}, x_{2n_m})- T(x_1, x_2) , 0 \} \mathop {\rightarrow }\limits ^{\text {a.s.}} 0\) as \(m \rightarrow \infty \). Thus, \(\max \, \{ \tau ^+ - T(x_1, x_2) , 0 \}\mathop {=}\limits ^{\text {a.s.}} 0\), which implies that \( \tau ^+ \mathop {\le }\limits ^{\text {a.s.}} T(x_1, x_2)\).
Finally, let \(\{ x_{1n_k} \}_{k = 0}^{\infty } \in {\mathcal {H}}_{n_1}\) be a subsequence of \(\{ x_{1n} \}\) and \(\{ x_{2n_k} \}_{k = 0}^{\infty } \in {\mathcal {H}}_{n_2}\) be a subsequence of \(\{ x_{2n} \}\) such that \(T(x_{1n_k}, x_{2n_k}) \mathop {\rightarrow }\limits ^{\text {a.s.}} \tau ^-\) as \(k \rightarrow \infty \). It follows from \(\tau ^- \mathop {\le }\limits ^{\text {a.s.}} \tau ^+\) and \(\tau ^+ \mathop {\le }\limits ^{\text {a.s.}} T(x_1, x_2)\) that \(\tau ^- \in {\mathcal {H}}^{{\mathbb {R}}_+}_1\), and hence, the sequence \(\{ (T(x_{1n_k}, x_{2n_k}), x_{1n_k}, x_{2n_k} ) \}_{k = 1}^{\infty }\) converges pointwise to \((\tau ^-, x_1, x_2 )\) as \(k \rightarrow \infty \). Since \(s_1(\cdot , \cdot , \cdot )\) is jointly sample continuous, it follows that \(s_1(T(x_{1n_k},\) \( x_{2n_k}), x_{1n_k}, x_{2n_k} )\mathop {\rightarrow }\limits ^{\text {a.s.}} s_1(\tau ^-, x_1, x_2 )\) as \(k \rightarrow \infty \). Now, since \(s_1(T(x_1, x_2) + t_1, x_1, x_2) \mathop {=}\limits ^{\text {a.s.}} 0\) for all \(t_1 \ge 0\), \(s_1(T(x_{1n_k}, x_{2n_k}),\) \( x_{1n_k}, x_{2n_k} ) \mathop {=}\limits ^{\text {a.s.}} 0\) for each k. Hence, \(s_1(\tau ^-, x_1, x_2 ) \mathop {=}\limits ^{\text {a.s.}} 0\) and, by the definition of the settling-time operator, \(T(x_1, x_2) \mathop {\le }\limits ^{\text {a.s.}} \tau ^-\). Now, it follows from \(\tau ^- \mathop {\le }\limits ^{\text {a.s.}} \tau ^+\), \(\tau ^+ \mathop {\le }\limits ^{\text {a.s.}} T(x_1, x_2)\), and \(T(x_1, x_2) \mathop {\le }\limits ^{\text {a.s.}} \tau ^-\) that \(\tau ^- \mathop {=}\limits ^{\text {a.s.}} T(x_1, x_2) \mathop {=}\limits ^{\text {a.s.}} \tau ^+\), and hence, \(T(x_{1n}, x_{2n}) \mathop {\rightarrow }\limits ^{\text {a.s.}} T(x_1, x_2)\) as \(n \rightarrow \infty \), which proves that \(T(\cdot , \cdot )\) is jointly sample continuous on \({\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2}\). \(\square \)
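The settling-time identity (11) used in the proof above, \(T(s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0))) \mathop {=}\limits ^{\text {a.s.}} \max \{ T(x_1(0), x_2(0)) - t_1, 0 \}\), can be checked concretely in a deterministic (noise-free) scalar special case with drift \(\dot{x} = -c\,\mathrm{sign}(x)|x|^{\theta}\), \(0<\theta <1\), where both the flow and the settling time have closed forms. The sketch below is purely illustrative and is not taken from the paper:

```python
import math

def settling_time(x0, c=1.0, theta=0.5):
    # Closed-form settling time of dx/dt = -c*sign(x)*|x|**theta:
    # T(x0) = |x0|**(1 - theta) / (c * (1 - theta)).
    return abs(x0) ** (1.0 - theta) / (c * (1.0 - theta))

def flow(t, x0, c=1.0, theta=0.5):
    # Closed-form solution: |x(t)|**(1 - theta) decays linearly to zero,
    # after which the state stays at the origin (finite-time convergence).
    s = abs(x0) ** (1.0 - theta) - c * (1.0 - theta) * t
    if s <= 0.0:
        return 0.0
    return math.copysign(s ** (1.0 / (1.0 - theta)), x0)

# Settling-time identity along the flow: T(x(t1)) == max(T(x0) - t1, 0).
```

For \(x_0 = 1\), \(c = 1\), \(\theta = 1/2\), one gets \(T(x_0) = 2\), and evaluating the settling time along the flow at any \(t_1\) reproduces \(\max \{ 2 - t_1, 0 \}\).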
Appendix 2
Proof of Theorem 3.1
(i) Let \(x_{20} \in {\mathbb {R}}^{n_2}\), \(\varepsilon > 0\), and \(\rho > 0\), and define \({\mathcal {D}}_{\varepsilon ,\rho } \triangleq \{ x_1 \in {\mathcal {B}}_{\varepsilon }(0) {:}\, V(x_1, x_{20}) < \alpha (\varepsilon )\rho \}\). Since \(V(\cdot , \cdot )\) is continuous and \(V(0, x_2) = 0\), it follows that \({\mathcal {D}}_{\varepsilon ,\rho }\) is nonempty and there exists \(\delta =\delta (\varepsilon ,\rho , x_{20}) > 0\) such that \(V(x_1, x_{20}) < \alpha (\varepsilon )\rho \), \(x_1 \in {\mathcal {B}}_{\delta }(0)\). Hence, \({\mathcal {B}}_{\delta }(0) \subseteq {\mathcal {D}}_{\varepsilon ,\rho }\). Next, it follows from (14) that \(V(x_1(t), x_2(t))\) is a (positive) supermartingale [26, Lemma 5.4], and hence, for every \(x_1(0) \in {\mathcal {H}}^{{\mathcal {B}}_{\delta }(0)}_{n_1} \subseteq {\mathcal {H}}^{{\mathcal {D}}_{\varepsilon ,\rho }}_{n_1}\), it follows from (13), with \(\alpha (\cdot )\in {\mathcal {K}}_\infty \), and the extended version of the Markov inequality for monotonically increasing functions [38, p. 193] that
which proves partial Lyapunov stability in probability with respect to \(x_1\).
To prove global partial asymptotic stability in probability, it follows from (14) and [39, Corollary 4.2] that \(\lim _{t\rightarrow \infty } k(\Vert x_2(t) \Vert ) r(V(x_1(t), x_2(t)))\mathop {=}\limits ^{\text {a.s.}} 0\). Since \(k(\Vert x_2(t)\Vert )\) is an \({\mathcal {F}}_t\)-submartingale, it follows that \(\lim _{t\rightarrow \infty }r(V(x_1(t), x_2(t)))\mathop {=}\limits ^{\text {a.s.}} 0\), which, since \(r{:}\,{{\mathbb {R}}}_+\rightarrow {{\mathbb {R}}}_+\), further implies that \(\lim _{t\rightarrow \infty } V(x_1(t), x_2(t))\mathop {=}\limits ^{\text {a.s.}} 0\). Now, it follows from (13) that \(\lim _{t\rightarrow \infty }\alpha (\Vert x_1(t) \Vert ) \le \lim _{t\rightarrow \infty } V(x_1(t),x_2(t)) \mathop {=}\limits ^{\text {a.s.}} 0\), which implies \({{\mathbb {P}}}^{x_0}\left( \lim _{t\rightarrow \infty }\Vert x_1(t) \Vert =0\right) =1\). Hence, \({\mathcal{G}}\) is globally partially asymptotically stable in probability and the stochastic settling-time operator satisfies \(T(x_1(0),x_2(0))\le \infty \) almost surely [40].
Next, we show that \(T(x_1(0),x_2(0))\) is finite with probability one and satisfies (17), and hence, \({{\mathbb {E}}}^{x_0} \left[ T(x_1(0),x_2(0))\right] < \infty \). Define \(T_0 \triangleq T(x_1(0),x_2(0))\), \(x(t) \triangleq [x_1^{\mathrm {T}}(t),x_2^{\mathrm {T}}(t)]^{\mathrm {T}}\), and \(\alpha (V) \triangleq \int _{0}^{V}\frac{{\mathrm{d}}v}{r(v)}\), \(V\in \overline{{{\mathbb {R}}}}_+\). Now, using Itô's formula (the stochastic chain rule), the stochastic differential of \(V(x(t))\) along the system trajectories \(x(t)\), \(t\ge 0\), is given by
Next, using (14) it follows that
Once again, using Itô's formula, it follows that
Hence, it follows from (98) and (16) that
Taking the expectation on both sides of (100) and using the fact that \(x(0)\mathop {=}\limits ^{\text {a.s.}} x_0\) and \(x(T_0)\mathop {=}\limits ^{\text {a.s.}} 0\) yields
Next, since \(q {:}\, [0, \infty ) \rightarrow {\mathbb {R}}\) is continuously differentiable and satisfies (18), and, by assumption, the process \(k(\Vert x_2(t)\Vert )\) is a positive \({\mathcal {F}}_t\)-submartingale, it follows that \(q(\cdot )\) is convex, monotonically increasing, and invertible. Hence, applying Jensen's inequality [38, p. 109], Fubini's theorem [41, p. 410], and the law of iterated expectation to the random variable \(q(T(x_1(0),x_2(0)))\) yields
which shows that \(T(x_1(0),x_2(0))\) is finite with probability one. Moreover, it follows from the stochastic finite-time stability of \({\mathcal {G}}\) with respect to \(x_1\) and Proposition 3.1 that \(T(\cdot , \cdot )\) can be extended to \({\mathcal {H}}_1^{\overline{{\mathbb {R}}}_+}\) and \(T(0, x_{20}) \mathop {=}\limits ^{\text {a.s.}} 0\).
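As a concrete illustration of the quantities in the proof above, take \(r(v) = c\,v^{\theta}\) with \(c>0\) and \(0<\theta<1\) (a hypothetical choice, not the paper's general hypothesis). The integral defining \(\alpha(\cdot)\) then evaluates in closed form, and, with \(k(\Vert x_2\Vert)\equiv k\) as in part (iii), the expectation bound takes the familiar fractional-power form:

```latex
\alpha(V) \;=\; \int_0^{V}\frac{\mathrm{d}v}{c\,v^{\theta}}
\;=\; \frac{V^{1-\theta}}{c\,(1-\theta)},
\qquad
\mathbb{E}^{x_0}\!\bigl[T(x_1(0),x_2(0))\bigr]
\;\le\; \frac{1}{k}\,
\frac{\bigl(V(x_1(0),x_2(0))\bigr)^{1-\theta}}{c\,(1-\theta)}
\;<\;\infty.
```

The exact constant in (17) follows the paper's statement; the bound above is the standard form consistent with \(q^{-1}(\cdot )=\frac{1}{k}(\cdot )\) noted in part (iii).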
(ii) Let \(\rho > 0\), \(x_{10} \in {\mathbb {R}}^{n_1}\), and \(x_{20} \in {\mathbb {R}}^{n_2}\). Since \(\alpha (\cdot )\) and \(\beta (\cdot )\) are class \({\mathcal {K}}_{\infty }\) functions, it follows that, for every \(\varepsilon > 0\), there exists \(\delta =\delta (\varepsilon ,\rho ) > 0\) such that \(\beta (\delta ) \le \alpha (\varepsilon )\rho \). Now, (14) implies that \(V(x_1(t), x_2(t))\) is a (positive) supermartingale, and hence, it follows from (13) and (19) that, for all \((x_1(0), x_2(0)) \in {\mathcal {H}}^{{\mathcal {B}}_{\delta }(0)}_{n_1} \times {\mathcal {H}}_{n_2}\),
Hence, for every \(x_1(0) \in {\mathcal {H}}^{{\mathcal {B}}_{\delta }(0)}_{n_1}\), \(x_1(t) \in {\mathcal {H}}^{{\mathcal {B}}_{\varepsilon }(0)}_{n_1}\), \(t \ge 0\), which proves uniform Lyapunov stability in probability with respect to \(x_1\). Stochastic finite-time partial convergence follows as in the proof of (i), implying global stochastic finite-time stability of \({\mathcal {G}}\) with respect to \(x_1\) uniformly in \(x_{20}\). In addition, the existence of a stochastic settling-time operator \(T {:}\, {\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2} \rightarrow {\mathcal {H}}_1^{[0, \infty )}\) that verifies (17) follows as in the proof of (i).
(iii) Global uniform stochastic finite-time stability of \({\mathcal {G}}\) with respect to \(x_1\) follows directly from (ii). Now, using arguments similar to those in the proof of (i), \(q^{-1}(\cdot )=\frac{1}{k}(\cdot )\) follows directly from (18) with \(k(\Vert x_2 \Vert ) = k \in {\mathbb {R}}_+\), \(x_2 \in {\mathbb {R}}^{n_2}\). The existence of a stochastic settling-time operator \(T {:}\, {\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2} \rightarrow {\mathcal {H}}_1^{ [0, \infty )}\) such that (20) holds then follows as in the proof of (i) and (17). Since \({{\mathbb {E}}}^{x_0}[T(x_1(0),x_2(0))]\) is not a function of \(x(t)\), \(t\ge 0\), strong stochastic finite-time convergence of \({\mathcal {G}}\) with respect to \(x_1\) uniformly in \(x_{20}\) is immediate. Hence, the nonlinear stochastic dynamical system \({\mathcal {G}}\) is globally strongly stochastically finite-time stable with respect to \(x_1\) uniformly in \(x_{20}\). \(\square \)
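As a numerical sanity check on the finite-time convergence mechanism underlying the theorem (illustrative only: scalar, deterministic drift, no diffusion term; in the stochastic setting of the paper the diffusion would additionally have to vanish at the origin), one can Euler-integrate \(\dot{x} = -c\,\mathrm{sign}(x)|x|^{\theta}\) and compare the observed settling time with the closed-form value \(|x_0|^{1-\theta}/(c(1-\theta))\):

```python
import math

def settle_time_numeric(x0, c=1.0, theta=0.5, dt=1e-5, tol=1e-6, t_max=10.0):
    # Forward-Euler integration of dx/dt = -c*sign(x)*|x|**theta;
    # returns the first time |x| falls below `tol`.
    x, t = x0, 0.0
    while abs(x) > tol and t < t_max:
        x -= c * math.copysign(abs(x) ** theta, x) * dt
        t += dt
    return t

def settle_time_analytic(x0, c=1.0, theta=0.5):
    # Exact settling time for the fractional-power drift:
    # T(x0) = |x0|**(1 - theta) / (c * (1 - theta)).
    return abs(x0) ** (1.0 - theta) / (c * (1.0 - theta))
```

With the defaults and \(x_0 = 1\), the analytic settling time is exactly 2, and the numerical estimate agrees to within a few parts in a thousand (the small gap comes from stopping at \(|x| \le \texttt{tol}\) rather than exactly 0).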
Rajpurohit, T., Haddad, W.M. Stochastic finite-time partial stability, partial-state stabilization, and finite-time optimal feedback control. Math. Control Signals Syst. 29, 10 (2017). https://doi.org/10.1007/s00498-017-0194-9