
Optimal Finite-Dimensional Controller of the Stochastic Differential Object’s State by Its Output. I. Incomplete Precise Measurements

  • CONTROL IN STOCHASTIC SYSTEMS AND UNDER UNCERTAINTY CONDITIONS
  • Published in: Journal of Computer and Systems Sciences International

Abstract

The well-known problem of synthesizing an inertial (dynamic) control law for a continuous stochastic object, optimal on average over a given time interval, is considered for the case when only a part of the object's state variables is measured accurately. Since its classical infinite-dimensional Stratonovich–Mortensen solution is practically unrealizable, it is proposed to restrict attention to optimizing the structure of a finite-dimensional dynamic controller whose order is chosen by the user. This finite dimensionality makes it possible to use a truncated version of the a posteriori probability density, which satisfies a deterministic integro-differential equation in partial derivatives. Using the Krotov extension principle, sufficient optimality conditions for the structural functions of the controller and the Lagrange–Pontryagin equations for finding their extremals are obtained. It is shown that in the particular cases of no measurements, complete measurements, and the use of only the current values of the incomplete measurements, the proposed controller turns out to be static (inertialess), and the relations for its synthesis coincide with known ones. For the dynamic controller, algorithms for finding each of its structural functions are given.
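As an illustration only (the abstract gives no formulas), the class of problems described above is commonly stated as follows: an Itô diffusion plant whose state is measured exactly but only in part, a finite-dimensional dynamic controller of user-chosen order, and an average cost over a fixed interval. The notation below (the functions f, g, f_0, F, the selector matrix C, the controller order q, and the structural functions \varphi, \psi) is an assumed, typical formulation introduced for illustration and is not taken from the paper itself:

\[ dx = f(x,u,t)\,dt + g(x,t)\,dw, \qquad x(t_0) \sim p_0(x), \]
\[ y = Cx \quad (\text{only part of the state is measured, exactly and without noise}), \]
\[ \dot z = \varphi(z,y,t), \quad z \in \mathbb{R}^q, \qquad u = \psi(z,y,t), \]
\[ J = \mathrm{E}\Big[ \int_{t_0}^{t_1} f_0(x,u,t)\,dt + F(x(t_1)) \Big] \to \min_{\varphi,\,\psi}. \]

In such a sketch, the structural functions \varphi and \psi of the controller are the unknowns of the synthesis problem; for example, with q = 0 the controller reduces to a static (inertialess) law u = \psi(y,t) of the kind mentioned in the abstract.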

REFERENCES

  1. R. L. Stratonovich, “Toward the theory of optimal control. Sufficient coordinates,” Avtom. Telemekh., No. 7, 910–917 (1962).

  2. R. E. Mortensen, “Stochastic optimal control with noisy observations,” Int. J. Control 4 (5), 455–466 (1966).

  3. M. H. A. Davis and P. P. Varaiya, “Dynamic programming conditions for partially observable stochastic systems,” SIAM J. Control 11 (2), 226–262 (1973).

  4. Yu. I. Paraev, Introduction to Statistical Dynamics of Control and Filtering Processes (Sov. radio, Moscow, 1976) [in Russian].

  5. V. E. Benes and I. Karatzas, “On the relation of Zakai’s and Mortensen’s equations,” SIAM J. Control Optim. 21 (3), 472–489 (1983).

  6. A. Bensoussan, Stochastic Control of Partially Observable Systems (Cambridge University Press, Cambridge, 1992).

  7. A. V. Panteleev and V. V. Semenov, “Optimal control of nonlinear probabilistic systems by an incomplete state vector,” Avtom. Telemekh., No. 1, 91–100 (1984).

  8. A. V. Panteleev and K. A. Rybakov, “Optimal continuous stochastic control systems with incomplete feedback: Approximate synthesis,” Autom. Remote Control 79 (1), 103–116 (2018).

  9. M. M. Khrustalev, “Conditions for Nash equilibrium in stochastic differential games with players incompletely aware of the state,” Izv. Ross. Akad. Nauk: Teor. Sist. Upr., No. 6, 194–208 (1995).

  10. W. M. Wonham, “On the separation theorem of stochastic control,” SIAM J. Control 6 (2), 312–326 (1968).

  11. A. V. Bosov, “Application of conditional-optimal filter for synthesis of suboptimal control in the problem of optimizing the output of a nonlinear differential stochastic system,” Autom. Remote Control 81 (11), 1963–1973 (2020).

  12. A. V. Bosov, “The problem of controlling the linear output of a nonlinear uncontrollable stochastic differential system by the square criterion,” J. Comput. Syst. Sci. Int. 60 (5), 719–739 (2021).

  13. E. A. Rudenko, “Operational-optimal finite-dimensional dynamic controller of the stochastic differential plant’s state according to its output: I. General nonlinear case,” J. Comput. Syst. Sci. Int. 61 (5), 724–740 (2022).

  14. I. I. Gikhman and A. V. Skorokhod, Introduction to the Theory of Random Processes (Nauka, Moscow, 1977) [in Russian].

  15. A. V. Panteleev, E. A. Rudenko, and A. S. Bortakovskii, Nonlinear Control Systems: Description, Analysis, and Synthesis (Vuzovskaya kniga, Moscow, 2008) [in Russian].

  16. V. I. Tikhonov and M. A. Mironov, Markov Processes (Sov. radio, Moscow, 1977) [in Russian].

  17. E. A. Rudenko, “Optimal structure of continuous nonlinear reduced-order Pugachev filter,” J. Comput. Syst. Sci. Int. 52 (6), 866–892 (2013).

  18. V. F. Krotov and V. I. Gurman, Methods and Problems of Optimal Control (Nauka, Moscow, 1973) [in Russian].

  19. V. I. Gurman, The Expansion Principle in Problems of Control (Nauka, Moscow, 1997) [in Russian].

Author information

Correspondence to E. A. Rudenko.

Ethics declarations

The author declares that he has no conflicts of interest.



Cite this article

Rudenko, E.A. Optimal Finite-Dimensional Controller of the Stochastic Differential Object’s State by Its Output. I. Incomplete Precise Measurements. J. Comput. Syst. Sci. Int. 62, 636–651 (2023). https://doi.org/10.1134/S1064230723040111
