
Dynamic optimization by the method of generalized quasigradients

Published in: Cybernetics

Conclusion

Since the method of generalized quasigradients is quite general [3, 4], the convergence results obtained here for the investigated methods of dynamic optimization apply to a wide range of extremal problems, including those with nondifferentiable criteria. Although convergence was proved for deterministic functionals as a particular case, these results readily yield convergence for the minimization of probabilistic dynamic functionals as well, i.e., when the minimum of a time-dependent regression function is sought. Such problems arise in pattern recognition and in the development of adaptive self-organizing systems under nonstationary conditions; they can be solved by dynamic methods of stochastic approximation. By analogy with [3], the above theorems on the convergence of algorithms of dynamic optimization by the method of generalized quasigradients can be used to prove convergence for the minimization of high-dimensional dynamic problems by the method of statistical gradients, for the solution of game-type dynamic stochastic problems, etc.
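The dynamic stochastic approximation setting described above can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the quadratic loss, the linear drift of the optimum, the noise level, and the constant step size are all assumptions chosen for illustration; the point is only that an iterate driven by noisy (quasi)gradients can track the moving minimum of a time-dependent regression function.

```python
import random

# Illustrative sketch (assumed setup, not the paper's algorithm): track the
# moving minimum of a time-varying quadratic f_t(x) = (x - m(t))^2 using
# noisy gradient observations, as in dynamic stochastic approximation.

def drifting_minimum(t):
    # Hypothetical linear drift of the optimum.
    return 0.01 * t

def noisy_quasigradient(x, t, rng):
    # Gradient of f_t at x, corrupted by observation noise.
    return 2.0 * (x - drifting_minimum(t)) + rng.gauss(0.0, 0.1)

def track(steps=2000, step_size=0.1, seed=0):
    rng = random.Random(seed)
    x = 5.0  # start far from the optimum
    for t in range(steps):
        # Constant-step stochastic gradient update; a constant (rather than
        # vanishing) step is needed to follow a drifting optimum.
        x -= step_size * noisy_quasigradient(x, t, rng)
    return x

if __name__ == "__main__":
    print(track(), drifting_minimum(2000))
```

With a constant step size the iterate follows the drifting minimum with a small systematic lag (roughly drift divided by the effective contraction rate); a vanishing step size, as in classical stochastic approximation, would eventually fall behind.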

When a sufficiently high rate of convergence can be ensured in identifying the parameters of the drift operator, the combined method of dynamic optimization is especially convenient.


Literature Cited

  1. Ya. Z. Tsypkin, A. I. Kaplinskii, and K. A. Larionov, “Algorithms of adaptation and learning under nonstationary conditions,” Izv. Akad. Nauk SSSR, Tekh. Kibernetika, No. 5 (1970).

  2. V. Dupac, “A dynamic stochastic approximation method,” Ann. Math. Stat., 36, No. 6 (1965).

  3. Yu. M. Ermol'ev, “A method of generalized stochastic gradients and stochastic quasi-Féjer sequences,” Kibernetika, No. 2 (1969).

  4. Yu. M. Ermol'ev and N. Z. Shor, “On minimization of nondifferentiable functions,” Kibernetika, No. 2 (1967).

  5. Yu. M. Ermol'ev, “On convergence of random quasi-Féjer sequences,” Kibernetika, No. 4 (1971).

  6. O. V. Guseva, “On the rate of convergence of the method of generalized stochastic gradients,” Kibernetika, No. 5 (1971).

Additional information

Translated from Kibernetika, No. 3, pp. 73–79, May–June, 1975

Cite this article

Meleshko, V.I. Dynamic optimization by the method of generalized quasigradients. Cybern Syst Anal 11, 421–428 (1975). https://doi.org/10.1007/BF01069469
