
Practical Implementation of the Solution of the Stabilization Problem for a Linear System with Discontinuous Random Drift by Indirect Observations


Abstract

We study the implementation of the optimal control strategy obtained in [1] and supplemented in [2]. The algorithm for optimal stabilization of a linear stochastic differential system at a point determined by its piecewise constant Markov drift has been tested in a substantial number of model experiments. The drift value is observed indirectly; i.e., the control problem is solved in the setting of incomplete information. Practical implementation is complicated by the instability of the Euler–Maruyama numerical schemes that implement the Wonham filter, which is a key element of the optimal control strategy. To perform the calculations, the Wonham filter is approximated by stable schemes based on the optimal filtering of Markov chains by discretized observations [3]. These schemes differ in implementation complexity and order of accuracy. The paper presents a comparative analysis of the control performance for various stable approximations of the Wonham filter and for its typical implementation via the Euler–Maruyama scheme. In addition, three versions of the discretized filters are compared, and final recommendations are given for their application in the problem of stabilizing a system with jumping drift.
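For orientation, the sketch below contrasts a plain Euler–Maruyama discretization of the Wonham filter with a simple simplex-preserving recursion driven by discretized observations. It is a minimal illustration of the instability and of the kind of remedy discussed above, written for a scalar observation model dY_t = a(θ_t) dt + σ dW_t; the function names and the particular Bayes-type update are assumptions made for this example and do not reproduce the specific schemes of [3].

```python
import numpy as np
from scipy.linalg import expm

def wonham_euler_maruyama(pi0, Lam, a, sigma, dY, h):
    """Euler-Maruyama discretization of the Wonham filter
        d pi = Lam^T pi dt + sigma^{-2} (diag(a) - (a^T pi) I) pi (dY - a^T pi dt).
    Nothing forces the iterates to stay in the probability simplex, which is
    the source of the instability mentioned in the abstract."""
    pi = np.array(pi0, dtype=float)
    out = [pi.copy()]
    for dy in dY:
        m = a @ pi                                  # predicted observation drift a^T pi
        pi = pi + h * (Lam.T @ pi) + (a - m) * pi * (dy - m * h) / sigma**2
        out.append(pi.copy())
    return np.array(out)

def discretized_observation_filter(pi0, Lam, a, sigma, dY, h):
    """Bayes-type recursion on time-discretized observations: predict with the
    transition matrix exp(Lam h), reweight with Gaussian likelihoods of the
    increment (N(a_j h, sigma^2 h) given state j), and renormalize, so the
    estimate stays a probability vector by construction."""
    P = expm(Lam * h)                               # one-step transition probabilities
    pi = np.array(pi0, dtype=float)
    out = [pi.copy()]
    for dy in dY:
        pred = P.T @ pi                             # prediction step
        lik = np.exp(-(dy - a * h) ** 2 / (2.0 * sigma**2 * h))
        pi = lik * pred
        pi /= pi.sum()                              # normalization keeps pi in the simplex
        out.append(pi.copy())
    return np.array(out)
```

Both routines take the generator Lam of the Markov drift, the vector a of per-state drift values, the noise intensity sigma, the observation increments dY, and the step h. For moderate steps the Euler–Maruyama iterates can leave the probability simplex (negative or non-normalized components), whereas the second recursion cannot, which is the practical reason for preferring discretized-observation filters.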


REFERENCES

  1. Borisov, A., Bosov, A., and Miller, G., Optimal Stabilization of Linear Stochastic System with Statistically Uncertain Piecewise Constant Drift, Mathematics, 2022, vol. 10, no. 2 (84).

  2. Bosov, A.V., Stabilization and trajectory tracking of linear system with jumping drift, Autom. Remote Control, 2022, vol. 83, no. 4, pp. 1963–1973.


  3. Borisov, A.V., L1-optimal filtering of Markov jump processes. II. Numerical analysis of particular realizations schemes, Autom. Remote Control, 2020, vol. 81, no. 12, pp. 2160–2180.


  4. Athans, M. and Falb, P.L., Optimal Control: an Introduction to the Theory and Its Applications, New York–Sydney: McGraw Hill, 1966.


  5. Beneš, V., Quadratic approximation by linear systems controlled from partial observations, Stochastic Analysis, Mayer-Wolf, E., Merzbach, E., and Shwartz, A., Eds., New York: Academic Press, 1991, pp. 39–50.

  6. Helmes, K. and Rishel, R., The solution of a partially observed stochastic optimal control problem in terms of predicted miss, IEEE Trans. Autom. Control, 1992, vol. 37, no. 9, pp. 1462–1464.


  7. Benes, V., Karatzas, I., Ocone, D., and Wang, H., Control with partial observations and an explicit solution of Mortensen’s equation, Appl. Math. Optim., 2004, no. 49, pp. 217–239.

  8. Rishel, R., A strong separation principle for stochastic control systems driven by a hidden Markov model, SIAM J. Control Optim., 1994, vol. 32, no. 4, pp. 1008–1020.


  9. Elliott, R.J., Aggoun, L., and Moore, J.B., Hidden Markov Models: Estimation and Control, New York: Springer-Verlag, 1995.


  10. Kloeden, P.E. and Platen, E., Numerical Solution of Stochastic Differential Equations, Berlin: Springer, 1992.

  11. Yin, G., Zhang, Q., and Liu, Y., Discrete-time approximation of Wonham filters, J. Control Theory Appl., 2004, no. 2, pp. 1–10.

  12. Kushner, H.J., Probability Methods for Approximations in Stochastic Control and for Elliptic Equations, New York: Academic Press, 1977.


Funding

This work was supported by the Russian Science Foundation, project no. 22-28-00588. The work was carried out using the infrastructure of the Center for Collective Use “High Performance Computing and Big Data” (CCU “Computer Science”) at the Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Moscow.

Author information


Correspondence to A. V. Borisov or A. V. Bosov.

Additional information

Translated by V. Potapchouck


About this article


Cite this article

Borisov, A.V., Bosov, A.V. Practical Implementation of the Solution of the Stabilization Problem for a Linear System with Discontinuous Random Drift by Indirect Observations. Autom Remote Control 83, 1417–1432 (2022). https://doi.org/10.1134/S0005117922090065

