Abstract
For a controlled linear stochastic differential system, we consider the problem of tracking the jumping state of an additive input action that determines the current stabilization direction (drift). The tracking objective, namely stabilizing the system near the varying drift, is formalized by a quadratic performance functional. The input action is a continuous-time Markov chain. The problem is considered under both complete and incomplete information, and dynamic programming is used to solve it in both cases. In the complete-information case, the solution of the Bellman equation is obtained from the properties of a finite-state chain; in the incomplete-information case, it follows from the separation of the control and state estimation problems provided by the Wonham filter estimate and from the properties of the quadratic performance criterion. A numerical experiment uses an applied model describing the position of a simple mechanical drive. The computational results, which confirm the applicability of the obtained solutions, as well as ways to overcome the difficulties of their numerical implementation, are presented and discussed in detail.
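The state-estimation component mentioned above, the Wonham filter, can be illustrated with a minimal Euler-discretized simulation. The sketch below is not the paper's model: the two-state generator, drift levels, noise intensity, and time grid are all illustrative assumptions, and the renormalization step is one simple way to handle the numerical difficulties of the discretized filter noted in the abstract.

```python
import numpy as np

# Hypothetical two-state setting (all values illustrative, not from the paper):
# the regime theta_t is a continuous-time Markov chain with generator LAM,
# observed through dY_t = h[theta_t] dt + sigma dW_t.
rng = np.random.default_rng(0)
LAM = np.array([[-0.5, 0.5],
                [0.3, -0.3]])   # generator: rows sum to zero
h = np.array([-1.0, 1.0])       # drift level in each regime
sigma = 0.4                     # observation noise intensity
dt, T = 1e-3, 10.0
n = int(T / dt)

# Simulate the chain and the observation, then run the Euler-discretized
# Wonham filter: dpi = LAM^T pi dt + pi*(h - h_bar)/sigma^2 (dY - h_bar dt).
theta = 0
pi = np.array([0.5, 0.5])       # filter estimate P(theta_t = i | Y_[0,t])
hits = 0
for k in range(n):
    # chain transition over one step of length dt
    if rng.random() < -LAM[theta, theta] * dt:
        theta = 1 - theta
    dY = h[theta] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    h_bar = pi @ h
    pi = pi + LAM.T @ pi * dt + pi * (h - h_bar) / sigma**2 * (dY - h_bar * dt)
    # the raw Euler step can leave the simplex; clip and renormalize
    pi = np.clip(pi, 1e-12, None)
    pi /= pi.sum()
    hits += int(np.argmax(pi) == theta)

frac = hits / n
print(f"fraction of time the MAP estimate matches the state: {frac:.2f}")
```

With a favorable signal-to-noise ratio such as the one assumed here, the maximum a posteriori estimate tracks the hidden regime most of the time; the filter output `pi` is what the separation principle plugs into the quadratic-criterion controller in place of the unobserved state.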
Funding
The work was supported by project no. 075-15-2020-799 of the Ministry of Science and Higher Education of the Russian Federation. The work was carried out using the infrastructure of the Shared Use Center “High-Performance Computing and Big Data” (SUC “Computer Science” of the Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences).
Additional information
Translated by V. Potapchouck
Cite this article
Bosov, A.V. Stabilization and Tracking of the Trajectory of a Linear System with Jump Drift. Autom Remote Control 83, 520–535 (2022). https://doi.org/10.1134/S0005117922040026