
The rate of convergence of the generalized gradient descent method
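The paper concerns the generalized gradient (subgradient) descent method for minimizing convex functions that need not be differentiable. As a hedged illustration only (none of the code below comes from the paper itself), the basic iteration is x_{k+1} = x_k − h_k g_k with g_k a subgradient of f at x_k and step sizes h_k → 0, Σ h_k = ∞; a minimal sketch:

```python
import math

def subgradient_descent(f, subgrad, x0, steps=200):
    """Generalized gradient (subgradient) descent sketch:
    x_{k+1} = x_k - h_k * g_k, with h_k = 1/sqrt(k)."""
    x = list(x0)
    best_x, best_f = x[:], f(x)
    for k in range(1, steps + 1):
        g = subgrad(x)
        h = 1.0 / math.sqrt(k)  # h_k -> 0 but sum h_k diverges
        x = [xi - h * gi for xi, gi in zip(x, g)]
        # f(x_k) need not decrease monotonically for nonsmooth f,
        # so we track the best iterate seen so far.
        if f(x) < best_f:
            best_x, best_f = x[:], f(x)
    return best_x, best_f

# Example objective: f(x) = |x1| + |x2|, nondifferentiable at its
# minimizer (0, 0); the lambda below returns one valid subgradient.
f = lambda x: abs(x[0]) + abs(x[1])
subgrad = lambda x: [1.0 if xi >= 0 else -1.0 for xi in x]
x_star, f_star = subgradient_descent(f, subgrad, [3.0, -2.0])
```

The example function and step-size rule are standard textbook choices for illustrating the method, not taken from the paper; the paper's subject is precisely how fast such iterations converge.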



References

  1. N. Z. Shor and M. B. Shchepakin, "A solution algorithm of a two-stage stochastic programming problem," Kibernetika [Cybernetics], no. 3, 1968.

  2. S. M. Movshovich, "Random search and gradient method in optimization problems," Izvestiya AN SSSR, Tekhnicheskaya Kibernetika, no. 6, 1966.

  3. B. T. Polyak, "Gradient methods of minimizing functionals," Zhurnal vychislitel'noi matematiki i matematicheskoi fiziki, vol. 3, no. 4, 1963.


Additional information

Kibernetika, Vol. 4, No. 3, pp. 98–99, 1968


About this article

Cite this article

Shor, N.Z. The rate of convergence of the generalized gradient descent method. Cybern Syst Anal 4, 79–80 (1968). https://doi.org/10.1007/BF01073933



Keywords

  • Operating System
  • Artificial Intelligence
  • System Theory
  • Gradient Descent
  • Descent Method