
Optimization of Subgradient Method Parameters Based on Rank-Two Correction of Metric Matrices

Journal of Applied and Industrial Mathematics

Abstract

We propose a relaxation subgradient method (RSM) that incorporates parameter optimization via a rank-two correction of metric matrices whose structure is similar to that of quasi-Newton (QN) methods. The metric matrix transformation suppresses the components orthogonal to, and amplifies those collinear with, the minimum-length subgradient vector. The problem of constructing a metric matrix is stated as the problem of solving a system of inequalities, and the system is solved by a new learning algorithm for which we obtain a convergence rate estimate depending on the parameters of the subgradient set. On this basis, a new RSM is developed and studied. Computational experiments on complex high-dimensional functions confirm the efficiency of the proposed algorithm.
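In outline, each iteration takes a subgradient step in a variable metric H and then applies a rank-two correction to H that amplifies the component along the step just taken and suppresses a complementary one, echoing quasi-Newton updates. The Python sketch below is a rough illustration only: the normalized step rule, the stopping test, and the DFP-style correction coefficients are simplifying assumptions made for readability, not the paper's formulas.

```python
import numpy as np

def rsm_sketch(subgrad, x0, max_iter=500, eps=1e-9):
    """Illustrative relaxation-subgradient loop with a rank-two metric update."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)            # metric matrix, identity at the start
    g = subgrad(x)                # an arbitrary subgradient at x0
    for _ in range(max_iter):
        Hg = H @ g
        gHg = g @ Hg
        if gHg < eps:             # metric norm of g is nearly zero: stop
            break
        s = -Hg / np.sqrt(gHg)    # step of unit length in the metric H
        x = x + s
        g_new = subgrad(x)
        y = g_new - g             # subgradient difference, as in QN updates
        sy = s @ y
        if sy > eps:              # curvature-like safeguard
            Hy = H @ y
            # DFP-style rank-two correction: the first term amplifies the
            # direction just stepped along, the second suppresses the
            # component of H along H @ y.
            H += np.outer(s, s) / sy - np.outer(Hy, Hy) / (y @ Hy)
        g = g_new
    return x
```

For example, `rsm_sketch(np.sign, np.ones(10))` runs the loop on the nonsmooth function f(x) = ||x||_1, with `np.sign` serving as the subgradient oracle.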



Funding

This research was supported by the Ministry of Science and Higher Education of the Russian Federation, state contract no. FEFE–2020–0013. The research by the second author was supported by the Science Foundation of the Republic of Serbia, grant no. 7750185, and the Ministry of Education, Science, and Technological Development of the Republic of Serbia, contract no. 451–03–68/2020–14/200124.

Author information


Corresponding authors

Correspondence to V. N. Krutikov, P. S. Stanimirović, O. N. Indenko, E. M. Tovbis or L. A. Kazakovtsev.

Additional information

Translated by V. Potapchouck


About this article


Cite this article

Krutikov, V.N., Stanimirović, P.S., Indenko, O.N. et al. Optimization of Subgradient Method Parameters Based on Rank-Two Correction of Metric Matrices. J. Appl. Ind. Math. 16, 427–439 (2022). https://doi.org/10.1134/S1990478922030073


