
Stabilization and Tracking of the Trajectory of a Linear System with Jump Drift

  • STOCHASTIC SYSTEMS
  • Published in: Automation and Remote Control

Abstract

For a controlled linear stochastic differential system, we consider the problem of tracking the jumping state of an additive input action that determines the current stabilization direction (drift). The tracking objective, which is to stabilize the system near the varying drift, is formalized by a quadratic performance functional. The input action is modeled by a continuous-time Markov chain. The problem is considered in the cases of complete and incomplete information, and dynamic programming is used to solve it in both cases. In the first case, the solution of the Bellman equation is obtained from the properties of the finite-state chain; in the second, it follows from the separation of the control and state estimation problems, provided by the Wonham filter estimate and the properties of the quadratic performance criterion. A numerical experiment uses an applied model describing the position of a simple mechanical drive. The results of calculations confirming the applicability of the solutions obtained, as well as ways to overcome the difficulties of their numerical implementation, are presented and discussed in detail.
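In the incomplete-information case described above, the controller replaces the unobserved chain state with the Wonham filter estimate. A minimal sketch of a discrete-time Wonham filter tracking a two-state jump drift is given below; all parameter values (drift levels, generator, noise intensity, step size) are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov drift: values and generator (transition-rate matrix).
# These numbers are illustrative only.
a = np.array([-1.0, 1.0])          # drift levels of the chain states
Lam = np.array([[-0.5, 0.5],
                [0.3, -0.3]])      # generator: rows sum to zero
sigma = 0.4                        # observation noise intensity
h, T = 1e-3, 10.0                  # time step and horizon
n = int(T / h)

theta = 0                          # true (hidden) chain state
p = np.array([0.5, 0.5])           # filtering distribution P(theta = a_i | obs)
hits = 0
for k in range(n):
    # Euler approximation of the chain's jump dynamics.
    if rng.random() < -Lam[theta, theta] * h:
        theta = 1 - theta
    # Noisy observation increment: dy = a(theta) dt + sigma dW.
    dy = a[theta] * h + sigma * np.sqrt(h) * rng.standard_normal()

    # Discrete Wonham filter scheme: predict with the forward
    # (Kolmogorov) equation, correct with the Gaussian likelihood
    # of the increment, then renormalize.
    p = p + h * (Lam.T @ p)
    p = p * np.exp(-(dy - a * h) ** 2 / (2 * sigma ** 2 * h))
    p = p / p.sum()

    hits += int(np.argmax(p) == theta)

print(f"fraction of steps with correct mode estimate: {hits / n:.2f}")
```

The maximum-posterior component of `p` serves as the mode estimate; in a separation-based controller it would be substituted for the unknown drift state when evaluating the quadratic criterion. As the paper's abstract notes, numerical implementation of such schemes requires care (e.g., the correction step can underflow for small noise), which the simple renormalization above only partly addresses.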



Funding

The work was supported by project no. 075-15-2020-799 of the Ministry of Science and Higher Education of the Russian Federation. The work was carried out using the infrastructure of the Shared Use Center “High-Performance Computing and Big Data” (SUC “Computer Science” of the Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences).

Author information

Corresponding author

Correspondence to A. V. Bosov.

Additional information

Translated by V. Potapchouck


Cite this article

Bosov, A.V. Stabilization and Tracking of the Trajectory of a Linear System with Jump Drift. Autom Remote Control 83, 520–535 (2022). https://doi.org/10.1134/S0005117922040026

