Abstract
We study the implementation of the optimal control strategy obtained in [1] and supplemented in [2]. The algorithm for the optimal stabilization of a linear stochastic differential system at a position determined by a piecewise constant Markov drift has been tested in a substantial number of model experiments. The drift value is observed indirectly; i.e., the control problem is solved under incomplete information. Practical implementation is complicated by the instability of the Euler–Maruyama numerical schemes implementing the Wonham filter, which is a key element of the optimal control strategy. To perform the calculations, the Wonham filter is approximated by stable schemes based on the optimal filtering of Markov chains by discretized observations [3]. These schemes differ in implementation complexity and order of accuracy. The paper presents a comparative analysis of the control performance for various stable approximations of the Wonham filter and for its typical implementation via the Euler–Maruyama scheme. In addition, three versions of the discretized filters are compared, and final recommendations are given for their application to the problem of stabilizing a system with jumping drift.
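The instability referred to above can be illustrated with a minimal two-state sketch. All parameters below are hypothetical, and the stable scheme shown is only the simplest prediction–correction analog of the discretized filters of [3], not the authors' exact algorithms: the Euler–Maruyama iterate of the Wonham filter is not guaranteed to remain a probability vector, whereas a Bayes-type update on discretized observations stays in the simplex by construction.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical parameters, chosen for illustration only.
f = np.array([-1.0, 1.0])            # drift levels taken by the hidden Markov chain
Lam = np.array([[-0.5, 0.5],
                [0.8, -0.8]])        # generator (transition-rate matrix) of the chain
sigma = 0.3                          # observation-noise intensity
h = 1e-3                             # discretization step
n = 10_000                           # number of steps

# Simulate the hidden chain and observation increments dY = f(theta) dt + sigma dW.
state = 0
dY = np.empty(n)
for k in range(n):
    dY[k] = f[state] * h + sigma * np.sqrt(h) * rng.standard_normal()
    if rng.random() < -Lam[state, state] * h:   # O(h) jump probability
        state = 1 - state

def wonham_euler_maruyama(dY):
    """Naive Euler-Maruyama discretization of the Wonham filter; the iterate
    need not stay nonnegative or sum to one."""
    pi = np.array([0.5, 0.5])
    out = np.empty((n, 2))
    for k, dy in enumerate(dY):
        fbar = pi @ f
        pi = pi + Lam.T @ pi * h + pi * (f - fbar) / sigma**2 * (dy - fbar * h)
        out[k] = pi
    return out

def discretized_filter(dY):
    """Prediction-correction filter on discretized observations: a Markov-chain
    prediction step, then a Gaussian likelihood update and renormalization,
    so the estimate remains a probability vector by construction."""
    P = np.eye(2) + Lam * h                      # one-step transition matrix, O(h)
    pi = np.array([0.5, 0.5])
    out = np.empty((n, 2))
    for k, dy in enumerate(dY):
        pi = (P.T @ pi) * np.exp(-(dy - f * h) ** 2 / (2 * sigma**2 * h))
        pi /= pi.sum()
        out[k] = pi
    return out

pi_em = wonham_euler_maruyama(dY)
pi_st = discretized_filter(dY)
print("Euler-Maruyama estimate range:", pi_em.min(), pi_em.max())
print("discretized-filter estimate range:", pi_st.min(), pi_st.max())
```

The normalization step is what distinguishes the stable scheme: the multiplicative likelihood update cannot push the estimate out of the simplex, at the cost of an extra exponential evaluation per step.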
REFERENCES
Borisov, A., Bosov, A., and Miller, G., Optimal stabilization of linear stochastic system with statistically uncertain piecewise constant drift, Mathematics, 2022, vol. 10, no. 2, article no. 184.
Bosov, A.V., Stabilization and trajectory tracking of linear system with jumping drift, Autom. Remote Control, 2022, vol. 83, no. 4, pp. 1963–1973.
Borisov, A.V., L1-optimal filtering of Markov jump processes. II. Numerical analysis of particular realizations schemes, Autom. Remote Control, 2020, vol. 81, no. 12, pp. 2160–2180.
Athans, M. and Falb, P.L., Optimal Control: An Introduction to the Theory and Its Applications, New York–Sydney: McGraw-Hill, 1966.
Beneš, V., Quadratic approximation by linear systems controlled from partial observations, Stochastic Analysis, Mayer-Wolf, E., Merzbach, E., and Shwartz, A., Eds., New York: Academic Press, 1991, pp. 39–50.
Helmes, K. and Rishel, R., The solution of a partially observed stochastic optimal control problem in terms of predicted miss, IEEE Trans. Autom. Control, 1992, vol. 37, no. 9, pp. 1462–1464.
Beneš, V., Karatzas, I., Ocone, D., and Wang, H., Control with partial observations and an explicit solution of Mortensen’s equation, Appl. Math. Optim., 2004, vol. 49, pp. 217–239.
Rishel, R., A strong separation principle for stochastic control systems driven by a hidden Markov model, SIAM J. Control Optim., 1994, vol. 32, no. 4, pp. 1008–1020.
Elliott, R.J., Aggoun, L., and Moore, J.B., Hidden Markov Models: Estimation and Control, New York: Springer-Verlag, 1995.
Kloeden, P.E. and Platen, E., Numerical Solution of Stochastic Differential Equations, Berlin: Springer, 1992.
Yin, G., Zhang, Q., and Liu, Y., Discrete-time approximation of Wonham filters, J. Control Theory Appl., 2004, no. 2, pp. 1–10.
Kushner, H.J., Probability Methods for Approximations in Stochastic Control and for Elliptic Equations, New York: Academic Press, 1977.
Funding
This work was supported by the Russian Science Foundation, project no. 22-28-00588. The work was carried out using the infrastructure of the Center for Collective Use “High Performance Computing and Big Data” (CCU “Computer Science” at the Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Moscow).
Translated by V. Potapchouck
Borisov, A.V., Bosov, A.V. Practical Implementation of the Solution of the Stabilization Problem for a Linear System with Discontinuous Random Drift by Indirect Observations. Autom Remote Control 83, 1417–1432 (2022). https://doi.org/10.1134/S0005117922090065