
A unified convergence framework for nonmonotone inexact decomposition methods

  • Leonardo Galli
  • Alessandro Galligari
  • Marco Sciandrone
Article

Abstract

In this work we propose a general framework that provides a unified convergence analysis for nonmonotone decomposition algorithms. The main motivation for embedding nonmonotone strategies within a decomposition approach is that enforcing a reduction of the objective function at every block update can be unnecessarily expensive, since groups of variables are updated individually. We define different search directions and line searches that satisfy the conditions required by the proposed nonmonotone decomposition framework to attain global convergence. We use a set of large-scale network equilibrium problems as a computational example to show the advantages of a nonmonotone algorithm over its monotone counterpart. Finally, a new smart implementation of decomposition methods is derived to overcome numerical issues on large-scale partially separable functions.
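The abstract describes the idea only at a high level. As a rough illustration (not the paper's algorithm), the minimal Python sketch below combines a Gauss–Seidel sweep over blocks of variables with a nonmonotone Armijo line search in the style of Grippo, Lampariello and Lucidi, where a trial step is accepted against the maximum of the last few objective values rather than against the current one. The function names, parameters and the steepest-descent block direction are illustrative assumptions, not taken from the article.

# Minimal sketch (illustrative, not the paper's algorithm): block Gauss-Seidel
# descent with a nonmonotone Armijo line search that accepts steps against the
# maximum of the last `memory` objective values instead of enforcing a
# monotone decrease at every block update.
import numpy as np

def nonmonotone_block_descent(f, grad, x0, blocks, max_iter=100,
                              memory=10, gamma=1e-4, delta=0.5, tol=1e-6):
    """f: objective, grad: full gradient, blocks: list of index arrays."""
    x = np.asarray(x0, dtype=float).copy()
    history = [f(x)]                        # recent objective values (nonmonotone reference)
    for _ in range(max_iter):
        if np.linalg.norm(grad(x)) <= tol:
            break
        for idx in blocks:                  # Gauss-Seidel sweep over the blocks
            g_blk = grad(x)[idx]            # block component of the gradient at the current point
            d = -g_blk                      # steepest-descent direction for this block
            f_ref = max(history[-memory:])  # nonmonotone reference value
            t = 1.0
            while True:                     # backtracking (nonmonotone Armijo) line search
                x_trial = x.copy()
                x_trial[idx] += t * d
                if f(x_trial) <= f_ref + gamma * t * g_blk.dot(d):
                    break
                t *= delta
                if t < 1e-12:               # safeguard: give up and accept a null step
                    x_trial = x
                    break
            x = x_trial
            history.append(f(x))
    return x

For example, on a problem in ten variables one might pass blocks=[np.arange(0, 5), np.arange(5, 10)]. Relaxing the acceptance test to the reference value f_ref is what allows the objective to increase occasionally during a sweep, which is the behaviour the abstract contrasts with a monotone counterpart.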

Keywords

Decomposition algorithms · Nonmonotone techniques · Global convergence · Gauss–Seidel rule · Large-scale problems · Numerical issues


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Information Engineering, University of Florence, Florence, Italy