Conclusion
As Theorem 2 shows, if the GDDS process has the stated properties, it converges to the minimum of the functional at the rate of a geometric progression. In particular, the required properties of the GDDS process are guaranteed by the conditions of Theorem 1 if m = m* (see Theorems 2 and 3 of [1]). When the problem of solving a system of equations or inequalities is reduced to a minimization problem, m* is known in advance (for a consistent system, the minimum of the residual functional is zero). Thus, the GDDS method should be effective for solving systems of nonlinear equations and inequalities. The essential feature of this method is that the guaranteed denominator of the geometric progression, \(q = 1/\sqrt[n]{\alpha}\), is independent of the degree of "pittedness" (ravine-likeness) of \(f(x)\), which makes it possible to apply the method successfully to systems of equations that are nearly degenerate.
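For concreteness, the guaranteed ratio can be evaluated numerically. The short sketch below (the function name and the sample values of α and n are illustrative, not from the paper) shows that q depends only on the dilation coefficient α and the dimension n, and worsens toward unity as n grows:

```python
# Guaranteed denominator q = 1 / alpha**(1/n) of the geometric
# progression bounding the convergence of the GDDS process.
# Note: q depends only on alpha and n, not on the conditioning of f(x).
def guaranteed_ratio(alpha: float, n: int) -> float:
    return alpha ** (-1.0 / n)

# For fixed alpha = 2, q approaches 1 as the dimension n grows.
for n in (2, 10, 100):
    print(n, guaranteed_ratio(2.0, n))
```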
As n increases, the effectiveness of the method in general decreases, both because q becomes close to unity and because a large memory is required to store the n × n matrix Bk. The modification proposed for the case when m* is unknown can be used to solve minimax problems and the general problem of convex programming, by reducing it to the solution of a sequence of convex inequalities or by using the method of penalty functions with expansion schemes.
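The process discussed above can be sketched in code. This is a minimal illustrative implementation, not the paper's exact algorithm: it assumes the standard space-dilation update B ← B(I + (1/α − 1)ξξᵀ) and a step size that shrinks by the guaranteed ratio q each iteration; the function names and the test function are chosen here for illustration.

```python
import numpy as np

def gdds(f, grad, x0, alpha=2.0, h=1.0, n_iter=200):
    """Gradient descent with dilatation of the space (illustrative sketch).

    B maps the dilated space back to the original one; each iteration
    dilates the space by alpha along the transformed gradient direction.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    B = np.eye(n)                      # accumulated inverse dilation
    beta = 1.0 / alpha
    best_x, best_f = x.copy(), f(x)
    for _ in range(n_iter):
        gB = B.T @ grad(x)             # gradient seen in the dilated space
        norm = np.linalg.norm(gB)
        if norm < 1e-12:               # gradient vanished: stop
            break
        xi = gB / norm
        x = x - h * (B @ xi)           # step in the original space
        # dilate the space: B <- B (I + (beta - 1) xi xi^T)
        B = B + (beta - 1.0) * np.outer(B @ xi, xi)
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
        h *= alpha ** (-1.0 / n)       # shrink step by the guaranteed ratio q
    return best_x, best_f

# Demo on a badly conditioned ("pitted") quadratic.
f = lambda x: x[0] ** 2 + 100.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 200.0 * x[1]])
best_x, best_f = gdds(f, grad, [1.0, 1.0])
```

On this ravine-like quadratic the dilation quickly damps the steep x₂ direction, so the iterates reach small function values within a few steps, whereas plain gradient descent with a comparable step would zigzag across the ravine.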
Literature Cited
1. N. Z. Shor, “The use of the operation of space dilatation in problems of minimizing convex functions,” Kibernetika, No. 1, Kiev (1970).
2. F. R. Gantmakher, The Theory of Matrices [in Russian], GITTL, Moscow (1953).
Translated from Kibernetika, No. 2, pp. 80–85, March–April, 1970.
Shor, N.Z. Convergence rate of the gradient descent method with dilatation of the space. Cybern Syst Anal 6, 102–108 (1970). https://doi.org/10.1007/BF01070506