
Stochastic finite-time partial stability, partial-state stabilization, and finite-time optimal feedback control

  • Original Article
Mathematics of Control, Signals, and Systems

Abstract

In many practical applications, stability with respect to only part of the system's state, together with finite-time convergence to the equilibrium of interest, is required. Finite-time partial stability concerns dynamical systems for which part of the trajectory converges to an equilibrium state in finite time. In this paper, we address finite-time partial stability in probability and uniform finite-time partial stability in probability for nonlinear stochastic dynamical systems. Specifically, we provide Lyapunov conditions involving a Lyapunov function that is positive definite and decrescent with respect to part of the system state and satisfies a differential inequality involving fractional powers for guaranteeing finite-time partial stability in probability. In addition, we show that finite-time partial stability in probability leads to uniqueness of solutions in forward time, and we establish necessary and sufficient conditions for almost sure continuity of the settling-time operator of the nonlinear stochastic dynamical system. Finally, we develop a unified framework for optimal nonlinear analysis and feedback control design addressing finite-time partial stochastic stability and finite-time, partial-state stochastic stabilization. Finite-time partial stability in probability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function that is positive definite and decrescent with respect to part of the system state and is shown to be the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation, thereby guaranteeing both finite-time, partial-state stability and optimality. The overall framework provides the foundation for extending stochastic optimal linear–quadratic controller synthesis to nonlinear–nonquadratic optimal finite-time, partial-state stochastic stabilization.


References

1. Agarwal R, Lakshmikantham V (1993) Uniqueness and nonuniqueness criteria for ordinary differential equations. World Scientific, Singapore
2. Yamada T, Watanabe S (1971) On the uniqueness of solutions of stochastic differential equations. J Math Kyoto Univ 11(1):155–167
3. Watanabe S, Yamada T (1971) On the uniqueness of solutions of stochastic differential equations II. J Math Kyoto Univ 11(3):553–563
4. Roxin E (1966) On finite stability in control systems. Rendiconti del Circolo Matematico di Palermo 15(3):273–282
5. Bhat SP, Bernstein DS (2000) Finite-time stability of continuous autonomous systems. SIAM J Control Optim 38(3):751–766
6. Bhat SP, Bernstein DS (2005) Geometric homogeneity with applications to finite-time stability. Math Control Signals Syst 17(2):101–127
7. Moulay E, Perruquetti W (2008) Finite time stability conditions for non-autonomous continuous systems. Int J Control 81(5):797–803
8. Haddad WM, L’Afflitto A (2015) Finite-time partial stability, stabilization, and optimal feedback control. J Franklin Inst 352:2329–2357
9. Chen W, Jiao LC (2010) Finite-time stability theorem of stochastic nonlinear systems. Automatica 46(12):2105–2108
10. Yin J, Khoo S, Man Z, Yu X (2011) Finite-time stability and instability of stochastic nonlinear systems. Automatica 47(12):2671–2677
11. Haddad WM, Chellaboina V (2008) Nonlinear dynamical systems and control: a Lyapunov-based approach. Princeton University Press, Princeton
12. Sharov V (1978) Stability and stabilization of stochastic systems vis-à-vis some of the variables. Avtomat i Telemekh 11:63–71 (in Russian)
13. Ignatyev O (2009) Partial asymptotic stability in probability of stochastic differential equations. Stat Prob Lett 79:597–601
14. Liu D, Wang W, Ignatyev O, Zhang W (2012) Partial stochastic asymptotic stability of neutral stochastic functional differential equations with Markovian switching by boundary condition. Adv Diff Equ 220:1–8
15. Haimo V (1986) Finite time controllers. SIAM J Control Optim 24(4):760–770
16. Bhat SP, Bernstein DS (1998) Continuous finite-time stabilization of the translational and rotational double integrators. IEEE Trans Autom Control 43(5):678–682
17. Hong Y (2002) Finite-time stabilization and stabilizability of a class of controllable systems. Syst Control Lett 46(4):231–236
18. Hong Y, Huang J, Xu Y (2001) On an output feedback finite-time stabilization problem. IEEE Trans Autom Control 46(2):305–309
19. Qian C, Lin W (2001) A continuous feedback approach to global strong stabilization of nonlinear systems. IEEE Trans Autom Control 46(7):1061–1079
20. Haddad WM, L’Afflitto A (2016) Finite-time stabilization and optimal feedback control. IEEE Trans Autom Control 61:1069–1074
21. Moulay E, Perruquetti W (2006) Finite time stability and stabilization of a class of continuous systems. J Math Anal Appl 323(2):1430–1443
22. Vorotnikov VI (1998) Partial stability and control. Birkhäuser, Boston
23. L’Afflitto A, Haddad WM, Bakolas E (2016) Partial-state stabilization and optimal feedback control. Int J Robust Nonlin Control 26:1026–1050
24. Freeman R, Kokotovic P (1996) Inverse optimality in robust stabilization. SIAM J Control Optim 34(4):1365–1391
25. Deng H, Krstic M (1997) Stochastic nonlinear stabilization—part II: inverse optimality. Syst Control Lett 32:151–159
26. Khasminskii RZ (2012) Stochastic stability of differential equations. Springer, Berlin
27. Arnold L (1974) Stochastic differential equations: theory and applications. Wiley, New York
28. Øksendal B (1995) Stochastic differential equations: an introduction with applications. Springer, Berlin
29. Dorato P (2006) An overview of finite-time stability. In: Menini L, Zaccarian L, Abdallah CT (eds) Current trends in nonlinear systems and control. Birkhäuser, Boston, pp 185–194
30. Meyn SP, Tweedie RL (1993) Markov chains and stochastic stability. Springer, London
31. Apostol TM (1957) Mathematical analysis. Addison-Wesley, Reading
32. Arapostathis A, Borkar VS, Ghosh MK (2012) Ergodic control of diffusion processes. Cambridge University Press, Cambridge
33. Curtis HD (2014) Orbital mechanics for engineering students. Elsevier, Oxford
34. Junkins J, Schaub H (2009) Analytical mechanics of space systems. AIAA Education Series, Reston
35. Salehi SV, Ryan E (1982) On optimal nonlinear feedback regulation of linear plants. IEEE Trans Autom Control 27(6):1260–1264
36. Crandall MG, Evans LC, Lions PL (1984) Some properties of viscosity solutions of Hamilton–Jacobi equations. Trans Am Math Soc 282(2):487–502
37. Clarke FH, Ledyaev YS, Stern RJ, Wolenski PR (1998) Nonsmooth analysis and control theory. Springer, New York
38. Folland GB (1999) Real analysis: modern techniques and their applications. Wiley, New York
39. Mao X (1999) Stochastic versions of the LaSalle theorem. J Diff Equ 153:175–195
40. Chen W, Jiao LC (2010) Finite-time stability theorem of stochastic nonlinear systems. Automatica 46:2105–2108
41. Apostol TM (1974) Mathematical analysis. Addison-Wesley, Reading


Author information


Correspondence to Wassim M. Haddad.

Additional information

This work was supported in part by the Air Force Office of Scientific Research under Grant FA9550-16-1-0100.

Appendices

Appendix 1

Proof of Proposition 3.2

(i) It follows from Definition 2.2 that

$$\begin{aligned} T( x_1(0), x_2(0)) = {\mathrm{inf}}\{ t \in {\mathbb {R}}_+ {:}\, s_1(t, x_1(0), x_2(0)) = 0 \} \end{aligned}$$
(96)

for all \(( x_1(0), x_2(0)) \in {\mathcal {H}}^{{{\mathbb {R}}}^{n_1} \setminus \{ 0 \}}_{n_1} \times {\mathcal {H}}_{n_2}\). Hence, \(T(s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0))) = {\mathrm{inf}}\{ t_2 \in {\mathbb {R}}_+ {:}\, s_1(t_2, s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0))) = 0 \} \). Now, for \(0 \le t_1 \le T( x_1(0),\) \(x_2(0))\), the semigroup property and (96) imply that

$$\begin{aligned}&T(s_1(t_1, x_1(0), x_2(0)) , s_2(t_1, x_1(0), x_2(0))) \\&\quad = {\mathrm{inf}}\{ t_2 \in {\mathbb {R}}_+ {:}\, s_1(t_2, s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0))) = 0 \} \\&\quad \mathop {=}\limits ^{\text {a.s.}}{\mathrm{inf}}\{ t_2 \in {\mathbb {R}}_+ {:}\, s_1(t_1+t_2, x_1(0), x_2(0)) = 0 \} \\&\quad \mathop {=}\limits ^{\text {a.s.}} T( x_1(0), x_2(0)) -t_1. \end{aligned}$$

Alternatively, for \(0 \le T( x_1(0), x_2(0)) \le t_1\), \(T(s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0)))\) \(\mathop {=}\limits ^{\text {a.s.}} 0\), which proves (11).
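For a concrete illustration of (96) and (11), consider the deterministic special case (zero diffusion) with scalar \(x_1\)-dynamics \(\dot{x}_1(t) = -\,\mathrm{sign}(x_1(t))|x_1(t)|^{1/2}\); this particular system is chosen purely for illustration and is not part of the general development. In this case,

$$\begin{aligned} s_1(t, x_1(0), x_2(0)) = \mathrm{sign}(x_1(0))\left( |x_1(0)|^{1/2} - \tfrac{t}{2} \right) ^2, \quad 0 \le t \le 2|x_1(0)|^{1/2}, \end{aligned}$$

and \(s_1(t, x_1(0), x_2(0)) = 0\) for \(t \ge 2|x_1(0)|^{1/2}\), so that (96) gives \(T(x_1(0), x_2(0)) = 2|x_1(0)|^{1/2}\). Hence, for \(0 \le t_1 \le T(x_1(0), x_2(0))\),

$$\begin{aligned} T(s_1(t_1, x_1(0), x_2(0)), s_2(t_1, x_1(0), x_2(0))) = 2\left( |x_1(0)|^{1/2} - \tfrac{t_1}{2} \right) = T(x_1(0), x_2(0)) - t_1, \end{aligned}$$

which verifies (11); moreover, in this example \(T(\cdot ,\cdot )\) is continuous, consistent with part (ii) below.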

(ii) Necessity is immediate. To prove sufficiency, suppose that \(T(\cdot , \cdot )\) is jointly sample continuous at \((0, x_2)\), \(x_2 \in {\mathcal {H}}_{n_2}\). Let \((x_1, x_2) \in {\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2}\) and consider the sequences \(\{ x_{1n} \}_{n = 1}^{\infty } \in {\mathcal {H}}_{n_1}\) converging pointwise to \(x_1\) and \(\{ x_{2n} \}_{n = 1}^{\infty } \in {\mathcal {H}}_{n_2}\) converging pointwise to \(x_2\). Let \(\tau ^- = \liminf _{n \rightarrow \infty } T(x_{1n}, x_{2n})\) and \(\tau ^+ = \limsup _{n \rightarrow \infty } T(x_{1n}, x_{2n})\) be pointwise limits. Note that \(\tau ^-\), \(\tau ^+ \in {\mathcal {H}}^{{\mathbb {R}}_+}_1\) and \(\tau ^- \mathop {\le }\limits ^{\text {a.s.}} \tau ^+\).

Next, let \(\{ x_{1n_m} \}_{m = 0}^{\infty } \in {\mathcal {H}}_{n_1}\) be a subsequence of \(\{ x_{1n} \}\) and \(\{ x_{2n_m} \}_{m = 0}^{\infty } \in {\mathcal {H}}_{n_2}\) be a subsequence of \(\{ x_{2n} \}\) such that \(T(x_{1n_m}, x_{2n_m}) \mathop {\rightarrow }\limits ^{\text {a.s.}}\tau ^+\) as \(m \rightarrow \infty \). The sequence \(\{ (T(x_1, x_2), x_{1n_m},\) \( x_{2n_m} ) \}_{m = 1}^{\infty }\) converges in \({\mathcal {H}}^{\overline{{\mathbb {R}}}_+}_1 \times {\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2}\) to \((T(x_1, x_2), x_1, x_2 )\) almost surely as \(m \rightarrow \infty \). Since \(s_1(T(x_1, x_2) + t_1, x_1, x_2) \mathop {=}\limits ^{\text {a.s.}} 0\) for all \(t_1 \ge 0\) and since all solutions to (1) and (2) are sample continuous in their initial conditions [27, Thm. 7.3.1], it follows that \(s_1(T(x_1, x_2), x_{1n_m}, x_{2n_m}) \mathop {\rightarrow }\limits ^{\text {a.s.}} s_1(T(x_1, x_2), x_1, x_2)\mathop {=}\limits ^{\text {a.s.}} 0\) as \(m \rightarrow \infty \). Thus, since \(T(0, x_2)\) is sample continuous for all \(x_2 \in {\mathcal {H}}_{n_2}\), it follows that

$$\begin{aligned} T(s_1(T(x_1, x_2), x_{1n_m}, x_{2n_m}), s_2(T(x_1, x_2), x_{1n_m}, x_{2n_m}))\mathop {\rightarrow }\limits ^{\text {a.s.}} T(0, s_2(T(x_1, x_2), x_1, x_2))\mathop {=}\limits ^{\text {a.s.}} 0.\nonumber \\ \end{aligned}$$
(97)

Now, with \(t_1 = T(x_1, x_2)\), \(x_1(0) = x_{1n_m}\), and \(x_2(0) = x_{2n_m}\), it follows from (11) and (97) that \(T(s_1(T(x_1, x_2), x_{1n_m}, x_{2n_m}), s_2(T(x_1, x_2), x_{1n_m}, x_{2n_m})) \mathop {=}\limits ^{\text {a.s.}} \max \, \{ T(x_{1n_m}, x_{2n_m})- T(x_1, x_2), 0 \} \) and \(\max \, \{ T(x_{1n_m}, x_{2n_m})- T(x_1, x_2) , 0 \} \mathop {\rightarrow }\limits ^{\text {a.s.}} 0\) as \(m \rightarrow \infty \). Thus, \(\max \, \{ \tau ^+ - T(x_1, x_2) , 0 \}\mathop {=}\limits ^{\text {a.s.}} 0\), which implies that \( \tau ^+ \mathop {\le }\limits ^{\text {a.s.}} T(x_1, x_2)\).

Finally, let \(\{ x_{1n_k} \}_{k = 0}^{\infty } \in {\mathcal {H}}_{n_1}\) be a subsequence of \(\{ x_{1n} \}\) and \(\{ x_{2n_k} \}_{k = 0}^{\infty } \in {\mathcal {H}}_{n_2}\) be a subsequence of \(\{ x_{2n} \}\) such that \(T(x_{1n_k}, x_{2n_k}) \mathop {\rightarrow }\limits ^{\text {a.s.}} \tau ^-\) as \(k \rightarrow \infty \). It follows from \(\tau ^- \mathop {\le }\limits ^{\text {a.s.}} \tau ^+\) and \(\tau ^+ \mathop {\le }\limits ^{\text {a.s.}} T(x_1, x_2)\) that \(\tau ^- \in {\mathcal {H}}^{{\mathbb {R}}_+}_1\), and hence, the sequence \(\{ (T(x_{1n_k}, x_{2n_k}), x_{1n_k}, x_{2n_k} ) \}_{k = 1}^{\infty }\) converges pointwise to \((\tau ^-, x_1, x_2 )\) as \(k \rightarrow \infty \). Since \(s_1(\cdot , \cdot , \cdot )\) is jointly sample continuous, it follows that \(s_1(T(x_{1n_k},\) \( x_{2n_k}), x_{1n_k}, x_{2n_k} )\mathop {\rightarrow }\limits ^{\text {a.s.}} s_1(\tau ^-, x_1, x_2 )\) as \(k \rightarrow \infty \). Now, since \(s_1(T(x_1, x_2) + t_1, x_1, x_2) \mathop {=}\limits ^{\text {a.s.}} 0\) for all \(t_1 \ge 0\), \(s_1(T(x_{1n_k}, x_{2n_k}),\) \( x_{1n_k}, x_{2n_k} ) \mathop {=}\limits ^{\text {a.s.}} 0\) for each k. Hence, \(s_1(\tau ^-, x_1, x_2 ) \mathop {=}\limits ^{\text {a.s.}} 0\) and, by the definition of the settling-time operator, \(T(x_1, x_2) \mathop {\le }\limits ^{\text {a.s.}} \tau ^-\). Now, it follows from \(\tau ^- \mathop {\le }\limits ^{\text {a.s.}} \tau ^+\), \(\tau ^+ \mathop {\le }\limits ^{\text {a.s.}} T(x_1, x_2)\), and \(T(x_1, x_2) \mathop {\le }\limits ^{\text {a.s.}} \tau ^-\) that \(\tau ^- \mathop {=}\limits ^{\text {a.s.}} T(x_1, x_2) \mathop {=}\limits ^{\text {a.s.}} \tau ^+\), and hence, \(T(x_{1n}, x_{2n}) \mathop {\rightarrow }\limits ^{\text {a.s.}} T(x_1, x_2)\) as \(n \rightarrow \infty \), which proves that \(T(\cdot , \cdot )\) is jointly sample continuous on \({\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2}\). \(\square \)

Appendix 2

Proof of Theorem 3.1

(i) Let \(x_{1} \in {\mathbb {R}}^{n_1}\), \(x_{20} \in {\mathbb {R}}^{n_2}\), \(\varepsilon > 0\), and \(\rho > 0\), and define \({\mathcal {D}}_{\varepsilon ,\rho } \triangleq \{ x_1 \in {\mathcal {B}}_{\varepsilon }(0) {:}\, V(x_1, x_{20}) < \alpha (\varepsilon )\rho \}\). Since \(V(\cdot , \cdot )\) is continuous and \(V(0, x_2) = 0\), it follows that \({\mathcal {D}}_{\varepsilon ,\rho }\) is nonempty and there exists \(\delta =\delta (\varepsilon ,\rho , x_{20}) > 0\) such that \(V(x_1, x_{20}) < \alpha (\varepsilon )\rho \), \(x_1 \in {\mathcal {B}}_{\delta }(0)\). Hence, \({\mathcal {B}}_{\delta }(0) \subseteq {\mathcal {D}}_{\varepsilon ,\rho }\). Next, it follows from (14) that \(V(x_1(t), x_2(t))\) is a (positive) supermartingale [26, Lemma 5.4], and hence, for every \(x_1(0) \in {\mathcal {H}}^{{\mathcal {B}}_{\delta }(0)}_{n_1} \subseteq {\mathcal {H}}^{{\mathcal {D}}_{\varepsilon ,\rho }}_{n_1}\), it follows from (13), with \(\alpha (\cdot )\in {\mathcal {K}}_\infty \), and the extended version of the Markov inequality for monotonically increasing functions [38, p. 193] that

$$\begin{aligned} {{\mathbb {P}}}^{x_0}\left( \sup _{t\ge 0}\Vert x_1(t)\Vert \ge \varepsilon \right)&\le \sup _{t\ge 0}\frac{{{\mathbb {E}}}^{x_0}[\alpha (\Vert x_1(t) \Vert ) ]}{\alpha (\varepsilon )} \le \sup _{t\ge 0}\frac{{{\mathbb {E}}}^{x_0}[V(x_1(t),x_2(t))]}{\alpha (\varepsilon )}\\&\le \frac{{{\mathbb {E}}}^{x_0}[V(x_1(0),x_2(0))]}{\alpha (\varepsilon )} \le \rho , \end{aligned}$$

which proves partial Lyapunov stability in probability with respect to \(x_1\).

To prove global partial asymptotic stability in probability, it follows from (14) and [39, Corollary 4.2] that \(\lim _{t\rightarrow \infty } k(\Vert x_2(t) \Vert ) r(V(x_1(t), x_2(t)))\mathop {=}\limits ^{\text {a.s.}} 0\). Since \(k(\Vert x_2(t)\Vert )\) is an \({\mathcal {F}}_t\)-submartingale, it follows that \(\lim _{t\rightarrow \infty }r(V(x_1(t), x_2(t)))\mathop {=}\limits ^{\text {a.s.}} 0\), which, since \(r{:}\,{{\mathbb {R}}}_+\rightarrow {{\mathbb {R}}}_+\), further implies that \(\lim _{t\rightarrow \infty } V(x_1(t), x_2(t))\mathop {=}\limits ^{\text {a.s.}} 0\). Now, it follows from (13) that \(\lim _{t\rightarrow \infty }\alpha (\Vert x_1(t) \Vert ) \le \lim _{t\rightarrow \infty } V(x_1(t),x_2(t)) \mathop {=}\limits ^{\text {a.s.}} 0\), which implies \({{\mathbb {P}}}^{x_0}\left( \lim _{t\rightarrow \infty }\Vert x_1(t) \Vert =0\right) =1\). Hence, \(\mathcal{G}\) is globally partially asymptotically stable in probability and the stochastic settling-time operator satisfies \(T(x_1(0),x_2(0))\le \infty \) almost surely [40].

Next, we show that \(T(x_1(0),x_2(0))\) is finite with probability one and satisfies (17), and hence, \({{\mathbb {E}}}^{x_0} \left[ T(x_1(0),x_2(0))\right] < \infty \). Define \(T_0 \triangleq T(x_1(0),x_2(0))\), \(x(t) \triangleq [x_1^{\mathrm {T}}(t),x_2^{\mathrm {T}}(t)]^{\mathrm {T}}\), and \(\alpha (V) \triangleq \int _{0}^{V}\frac{{\mathrm{d}}v}{r(v)}\), \(V\in \overline{{{\mathbb {R}}}}_+\). Now, using Itô's (chain rule) formula, the stochastic differential of \(V(x(t))\) along the system trajectories \(x(t)\), \(t\ge 0\), is given by

$$\begin{aligned} {\mathrm{d}}V(x(t)) = {\mathcal {L}}V(x(t)){\mathrm{d}}t + \frac{\partial V}{\partial x}D(x(t)){\mathrm{d}}w(t). \end{aligned}$$

Next, using (14) it follows that

$$\begin{aligned} \int ^{T_0}_{0} k(\Vert x_2(\tau )\Vert ){\mathrm{d}}\tau&= \int ^{T_0}_{0} k(\Vert x_2(\tau )\Vert )\frac{r(V(x(\tau )))}{r(V(x(\tau )))}{\mathrm{d}}\tau \nonumber \\&\le \int ^{T_0}_{0} -\frac{{\mathcal {L}}V(x(\tau ))}{r(V(x(\tau )))}{\mathrm{d}}\tau \nonumber \\&\le \int ^{T_0}_{0} -\frac{{\mathrm{d}}V(x(\tau ))}{r(V(x(\tau )))} + \int ^{T_0}_{0} \frac{1}{r(V(x(\tau )))}\frac{\partial V}{\partial x}D(x(\tau )){\mathrm{d}}w(\tau )\nonumber \\&= \int ^{T_0}_{0} -\frac{{\mathrm{d}}\alpha (V)}{{\mathrm{d}}V}{\mathrm{d}}V(x(\tau )) + \int ^{T_0}_{0}\frac{1}{r(V(x(\tau )))} \frac{\partial V}{\partial x}D(x(\tau )){\mathrm{d}}w(\tau ). \end{aligned}$$
(98)

Once again, using Itô's (chain rule) formula, it follows that

$$\begin{aligned}&{\mathrm{d}}\alpha (V(x(t)))=\left[ \frac{\partial \alpha (V(x))}{\partial x} f(x(t)) + \frac{1}{2}{\mathrm{tr}}\ D^{\mathrm {T}}(x)\frac{\partial ^2 \alpha (V(x))}{\partial x^2}D(x) \right] {\mathrm{d}}t\nonumber \\&\qquad + \frac{\partial \alpha (V(x))}{\partial x}D(x){\mathrm{d}}w(t)\nonumber \\&\quad =\left[ \frac{{\mathrm{d}}\alpha (V)}{{\mathrm{d}}V}\frac{\partial V(x)}{\partial x} f(x(t))+ \frac{1}{2}{\mathrm{tr}}\ D^{\mathrm {T}}(x)\frac{\partial }{\partial x}\left( \frac{{\mathrm{d}}\alpha (V)}{{\mathrm{d}}V}\frac{\partial V(x)}{\partial x}\right) D(x) \right] {\mathrm{d}}t \nonumber \\&\qquad + \frac{{\mathrm{d}}\alpha (V)}{{\mathrm{d}}V}\frac{\partial V(x)}{\partial x}D(x){\mathrm{d}}w(t)\nonumber \\&\quad = \frac{{\mathrm{d}}\alpha (V)}{{\mathrm{d}}V}\left[ \left( \frac{\partial V(x)}{\partial x} f(x(t))+ \frac{1}{2}{\mathrm{tr}}\ D^{\mathrm {T}}(x)\frac{\partial ^2 V(x)}{\partial x^2}D(x)\right) {\mathrm{d}}t + \frac{\partial V(x)}{\partial x}D(x){\mathrm{d}}w(t) \right] \nonumber \\&\qquad + \frac{1}{2}{\mathrm{tr}}\ D^{\mathrm {T}}(x)\left( \frac{\partial V(x)}{\partial x}\right) ^{\mathrm {T}}\frac{{\mathrm{d}}^2\alpha (V)}{{\mathrm{d}}V^2}\left( \frac{\partial V(x)}{\partial x}\right) D(x){\mathrm{d}}t \nonumber \\&\quad = \frac{{\mathrm{d}}\alpha (V)}{{\mathrm{d}}V} {\mathrm{d}}V(x(t)) + \frac{1}{2}{\mathrm{tr}}\ D^{\mathrm {T}}(x)\left( \frac{\partial V(x)}{\partial x}\right) ^{\mathrm {T}}\frac{{\mathrm{d}}^2\alpha (V)}{{\mathrm{d}}V^2}\left( \frac{\partial V(x)}{\partial x}\right) D(x){\mathrm{d}}t. \end{aligned}$$
(99)

Hence, it follows from (98), (99), and (16) that

$$\begin{aligned}&\int ^{T_0}_{0} k( \Vert x_2(\tau )\Vert ){\mathrm{d}}\tau \le \int ^{T_0}_{0} -{\mathrm{d}}\alpha (V(x(\tau ))) + \int ^{T_0}_{0} \frac{1}{r(V(x(\tau )))}\frac{\partial V}{\partial x}D(x(\tau )){\mathrm{d}}w(\tau ) \nonumber \\&\quad \qquad + \int ^{T_0}_{0} \frac{1}{2}{\mathrm{tr}}~ D^{\mathrm {T}}(x)\left( \frac{\partial V(x)}{\partial x}\right) ^{\mathrm {T}}\frac{{\mathrm{d}}^2\alpha (V)}{{\mathrm{d}}V^2}\left( \frac{\partial V(x)}{\partial x}\right) D(x){\mathrm{d}}\tau \nonumber \\&\qquad = \alpha (V(x(0))) - \alpha (V(x(T_0))) + \int ^{T_0}_{0} \frac{1}{r(V(x(\tau )))}\frac{\partial V}{\partial x}D(x(\tau )){\mathrm{d}}w(\tau ) \nonumber \\&\quad \qquad - \int ^{T_0}_{0} \frac{r'(V)}{r^2(V)}\frac{1}{2}{\mathrm{tr}} \left( \frac{\partial V(x)}{\partial x}D(x) \right) ^{\mathrm {T}}\left( \frac{\partial V(x)}{\partial x}D(x)\right) {\mathrm{d}}\tau \nonumber \\&\qquad \le \int _{0}^{V(x(0))}\frac{{\mathrm{d}}v}{r(v)} - \int _{0}^{V(x(T_0))}\frac{{\mathrm{d}}v}{r(v)} + \int ^{T_0}_{0}\frac{1}{r(V(x(\tau )))} \frac{\partial V}{\partial x}D(x(\tau )){\mathrm{d}}w(\tau ). \end{aligned}$$
(100)

Taking the expectation on both sides of (100), noting that the stochastic integral in (100) has zero expectation, and using the fact that \(x(0)\mathop {=}\limits ^{\text {a.s.}} x_0\) and \(x_1(T_0)\mathop {=}\limits ^{\text {a.s.}} 0\), and hence \(V(x(T_0))\mathop {=}\limits ^{\text {a.s.}} 0\), yields

$$\begin{aligned} {{\mathbb {E}}}^{x_0} \left[ \int ^{T_0}_{0} k( \Vert x_2(\tau )\Vert )\mathrm{d}\tau \right] \le \int _{0}^{V(x_0)}\frac{{\mathrm{d}}v}{r(v)}. \end{aligned}$$
(101)
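For example, in the special case where \(r(v) = c\, v^{\theta }\) with \(c > 0\) and \(\theta \in (0,1)\), a fractional-power choice made here only to illustrate the bound, the right-hand side of (101) evaluates to

$$\begin{aligned} \int _{0}^{V(x_0)}\frac{{\mathrm{d}}v}{r(v)} = \int _{0}^{V(x_0)} \frac{{\mathrm{d}}v}{c\, v^{\theta }} = \frac{V^{1-\theta }(x_0)}{c\,(1-\theta )}, \end{aligned}$$

so that \({{\mathbb {E}}}^{x_0}\left[ \int ^{T_0}_{0} k(\Vert x_2(\tau )\Vert ){\mathrm{d}}\tau \right] \le \frac{V^{1-\theta }(x_0)}{c(1-\theta )}\).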

Next, since \(q {:}\, [0, \infty ) \rightarrow {\mathbb {R}}\) is continuously differentiable and satisfies (18), and, by assumption, the process \(k(\Vert x_2(t)\Vert )\) is a positive \({\mathcal {F}}_t\)-submartingale, it follows that \(q(\cdot )\) is convex, monotonically increasing, and invertible. Hence, applying Jensen's inequality [38, p. 109], Fubini's theorem [41, p. 410], and the law of iterated expectation to the random variable \(q(T(x_1(0),x_2(0)))\) yields

$$\begin{aligned} {{\mathbb {E}}}^{x_0}\left[ T(x_1(0),x_2(0))\right]= & {} q^{-1}\left( q\left( {{\mathbb {E}}}^{x_0}\left[ T(x_1(0),x_2(0))\right] \right) \right) \nonumber \\\le & {} q^{-1}\left( {{\mathbb {E}}}^{x_0}\left[ q(T(x_1(0),x_2(0)))\right] \right) \nonumber \\= & {} q^{-1}\left( {{\mathbb {E}}}^{x_0} \left[ \int ^{T_0}_{0} {{\mathbb {E}}}^{x_0}\left[ k(\Vert x_2(\tau )\Vert )\right] {{\mathrm{d}}}\tau \right] \right) \nonumber \\= & {} q^{-1}\left( {{\mathbb {E}}}^{x_0} \left[ {{\mathbb {E}}}^{x_0}\left[ \int ^{T_0}_{0} k(\Vert x_2(\tau )\Vert ){\mathrm{d}}\tau \,\Big |\, T_0\right] \right] \right) \nonumber \\= & {} q^{-1}\left( {{\mathbb {E}}}^{x_0} \left[ \int ^{T_0}_{0} k(\Vert x_2(\tau )\Vert ){\mathrm{d}}\tau \right] \right) \nonumber \\\le & {} q^{-1}\left( \int _{0}^{V(x_0)}\frac{{{\mathrm{d}}}v}{r(v)}\right) , \end{aligned}$$
(102)

which shows that \(T(x_1(0),x_2(0))\) is finite with probability one. Moreover, it follows from the stochastic finite-time stability of \({\mathcal {G}}\) with respect to \(x_1\) and Proposition 3.1 that \(T(\cdot , \cdot )\) can be extended to \({\mathcal {H}}_1^{\overline{{\mathbb {R}}}_+}\) and \(T(0, x_{20}) \mathop {=}\limits ^{\text {a.s.}} 0\).
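In particular, if \(k(\Vert x_2 \Vert ) \equiv k > 0\), so that, as noted in part (iii) below, \(q^{-1}(\cdot ) = \frac{1}{k}(\cdot )\), and if \(r(v) = c\, v^{\theta }\) as in the illustration following (101), then (102) specializes to

$$\begin{aligned} {{\mathbb {E}}}^{x_0}\left[ T(x_1(0),x_2(0))\right] \le \frac{1}{k}\int _{0}^{V(x_0)}\frac{{\mathrm{d}}v}{c\,v^{\theta }} = \frac{V^{1-\theta }(x_0)}{k\,c\,(1-\theta )}, \end{aligned}$$

which is of the familiar fractional-power form of settling-time estimates for stochastic finite-time stability.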

(ii) Let \(\rho > 0\), \(x_{10} \in {\mathbb {R}}^{n_1}\), and \(x_{20} \in {\mathbb {R}}^{n_2}\). Since \(\alpha (\cdot )\) and \(\beta (\cdot )\) are class \({\mathcal {K}}_{\infty }\) functions, it follows that, for every \(\varepsilon > 0\), there exists \(\delta =\delta (\varepsilon ,\rho ) > 0\) such that \(\beta (\delta ) \le \alpha (\varepsilon )\rho \). Now, (14) implies that \(V(x_1(t), x_2(t))\) is a (positive) supermartingale, and hence, it follows from (13) and (19) that, for all \((x_1(0), x_2(0)) \in {\mathcal {H}}^{{\mathcal {B}}_{\delta }(0)}_{n_1} \times {\mathcal {H}}_{n_2}\),

$$\begin{aligned} {{\mathbb {P}}}^{x_0}\left( \sup _{t\ge 0}\Vert x_1(t)\Vert \ge \varepsilon \right)&\le \sup _{t\ge 0}\frac{{{\mathbb {E}}}^{x_0}[\alpha (\Vert x_1(t) \Vert ) ]}{\alpha (\varepsilon )}\le \sup _{t\ge 0}\frac{{{\mathbb {E}}}^{x_0}[V(x_1(t),x_2(t))]}{\alpha (\varepsilon )}\\&\le \frac{{{\mathbb {E}}}^{x_0}[V(x_1(0),x_2(0))]}{\alpha (\varepsilon )} \le \frac{\beta (\delta )}{\alpha (\varepsilon )} \le \rho . \end{aligned}$$

Hence, for every \(x_1(0) \in {\mathcal {H}}^{{\mathcal {B}}_{\delta }(0)}_{n_1}\), \(x_1(t) \in {\mathcal {H}}^{{\mathcal {B}}_{\varepsilon }(0)}_{n_1}\), \(t \ge 0\), which proves uniform Lyapunov stability in probability with respect to \(x_1\). Stochastic finite-time partial convergence follows as in the proof of (i), implying global stochastic finite-time stability of \({\mathcal {G}}\) with respect to \(x_1\) uniformly in \(x_{20}\). In addition, the existence of a stochastic settling-time operator \(T {:}\, {\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2} \rightarrow {\mathcal {H}}_1^{[0, \infty )}\) that verifies (17) follows as in the proof of (i).

(iii) Global uniform stochastic finite-time stability of \({\mathcal {G}}\) with respect to \(x_1\) follows directly from (ii). Now, using arguments similar to those in the proof of (i), \(q^{-1}(\cdot )=\frac{1}{k}(\cdot )\) follows directly from (18) with \(k(\Vert x_2 \Vert ) = k \in {\mathbb {R}}_+\), \(x_2 \in {\mathbb {R}}^{n_2}\). Next, the existence of a stochastic settling-time operator \(T {:}\, {\mathcal {H}}_{n_1} \times {\mathcal {H}}_{n_2} \rightarrow {\mathcal {H}}_1^{ [0, \infty )}\) such that (20) holds follows as in the proof of (i) and (17). Since \({{\mathbb {E}}}^{x_0}[T(x_1(0),x_2(0))]\) is not a function of \(x(t)\), \(t\ge 0\), strong stochastic finite-time convergence of \({\mathcal {G}}\) with respect to \(x_1\) uniformly in \(x_{20}\) is immediate. Hence, the nonlinear stochastic dynamical system \({\mathcal {G}}\) is globally strongly stochastic finite-time stable with respect to \(x_1\) uniformly in \(x_{20}\). \(\square \)
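To complement the proof, the following minimal Monte Carlo sketch (not part of the original paper; the scalar dynamics, parameter values, and settling threshold are assumptions made purely for illustration) simulates a system whose \(x_1\)-dynamics have a fractional-power drift and multiplicative noise vanishing at the origin, and empirically estimates the mean settling time of the partial state \(x_1\), which can be compared against a bound of the form given in (102).

```python
import numpy as np

# Illustrative (assumed) dynamics: dx1 = -c*sign(x1)*|x1|**theta dt + sigma*x1 dW,
# dx2 = -x2 dt.  The fractional-power drift drives x1 to zero in finite time,
# while the noise vanishes at x1 = 0; x2 plays the role of the states that are
# not required to converge.
rng = np.random.default_rng(0)
c, theta, sigma = 2.0, 0.5, 0.3
dt, t_final, n_paths = 1e-3, 5.0, 2000
n_steps = int(t_final / dt)
threshold = 1e-4                      # proxy for "x1 has reached the origin"

x1 = np.full(n_paths, 1.0)            # partial state of interest, x1(0) = 1
x2 = np.full(n_paths, 0.5)            # remaining state, x2(0) = 0.5
settling_time = np.full(n_paths, np.inf)

for step in range(n_steps):
    t = step * dt
    dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    drift1 = -c * np.sign(x1) * np.abs(x1) ** theta
    x1 = x1 + drift1 * dt + sigma * x1 * dw   # Euler--Maruyama step for x1
    x2 = x2 - x2 * dt                         # x2 need not reach the origin
    just_settled = (np.abs(x1) < threshold) & np.isinf(settling_time)
    settling_time[just_settled] = t

settled = np.isfinite(settling_time)
print(f"fraction of paths settled by t = {t_final}: {settled.mean():.3f}")
print(f"empirical mean settling time: {settling_time[settled].mean():.3f}")
```

Under these parameters the deterministic part of the \(x_1\)-dynamics alone settles at \(t = 1\), so the empirical settling times are expected to cluster near that value; the script is only a qualitative check and does not replace the Lyapunov verification required by Theorem 3.1.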


Cite this article

Rajpurohit, T., Haddad, W.M. Stochastic finite-time partial stability, partial-state stabilization, and finite-time optimal feedback control. Math. Control Signals Syst. 29, 10 (2017). https://doi.org/10.1007/s00498-017-0194-9
