Computational Optimization and Applications, Volume 38, Issue 3, pp. 305–327

Nonmonotone projected gradient methods based on barrier and Euclidean distances

  • Alfred Auslender
  • Paulo J. S. Silva
  • Marc Teboulle


We consider nonmonotone projected gradient methods based on non-Euclidean distances, which play the role of a barrier for a given constraint set. Our basic scheme uses the resulting projection-like maps, which produce interior trajectories, and combines them with the nonmonotone line search technique recently proposed for unconstrained problems by Zhang and Hager. The combination of these two ideas yields a nonmonotone scheme for constrained nonconvex problems, which is proven to converge to a stationary point. Variants of this algorithm that incorporate spectral steplengths are also studied and compared with classical nonmonotone schemes based on the usual Euclidean projection. To validate our approach, we report numerical results on bound-constrained problems from the CUTEr library collection.
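The building blocks named in the abstract (a projected-gradient step, the Zhang-Hager nonmonotone line search, and a spectral Barzilai-Borwein steplength) can be sketched for a bound-constrained problem. The sketch below implements the classical Euclidean-projection variant that the paper uses as a baseline, not the authors' barrier-distance method; the function name `nonmonotone_spg` and all parameter defaults are illustrative choices, not taken from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

def nonmonotone_spg(f, grad, x0, lo, hi, eta=0.85, delta=1e-4,
                    lam_min=1e-10, lam_max=1e10, max_iter=500, tol=1e-8):
    """Nonmonotone spectral projected gradient for min f(x) s.t. lo <= x <= hi.

    The line-search test compares f against the Zhang-Hager weighted
    average C_k of past function values rather than f(x_k) itself, and
    the trial steplength lam is the Barzilai-Borwein spectral choice.
    """
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    g = grad(x)
    fx = f(x)
    C, Q = fx, 1.0        # Zhang-Hager reference value and its weight
    lam = 1.0             # initial spectral steplength
    for _ in range(max_iter):
        d = project_box(x - lam * g, lo, hi) - x   # projected-gradient direction
        if np.linalg.norm(d) <= tol:               # stationarity measure
            break
        alpha, slope = 1.0, float(g @ d)           # slope < 0 whenever d != 0
        while f(x + alpha * d) > C + delta * alpha * slope and alpha > 1e-12:
            alpha *= 0.5                           # backtracking line search
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = float(s @ y)
        # Barzilai-Borwein steplength, safeguarded to [lam_min, lam_max]
        lam = np.clip(float(s @ s) / sy, lam_min, lam_max) if sy > 0 else lam_max
        x, g, fx = x_new, g_new, f(x_new)
        Q_new = eta * Q + 1.0                      # Zhang-Hager averaging:
        C = (eta * Q * C + fx) / Q_new             # C_{k+1} is a convex combination
        Q = Q_new                                  # of C_k and the new f value
    return x

# Example: minimize sum((x - 2)^2) over the box [0, 1]^2; the unconstrained
# minimizer (2, 2) is infeasible, so the method should stop at (1, 1).
f = lambda x: float(np.sum((x - 2.0) ** 2))
grad = lambda x: 2.0 * (x - 2.0)
x_star = nonmonotone_spg(f, grad, [0.5, 0.5], 0.0, 1.0)
```

With `eta = 0` the average collapses to `C_k = f(x_k)` and the scheme reduces to a monotone Armijo search along the projection arc; `eta` close to 1 makes the search more tolerant of temporary increases in f, which is the point of the nonmonotone strategy.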


Keywords: Convex and nonconvex optimization; Projected gradient algorithms; Nonmonotone methods; Spectral stepsizes; Barrier proximal distances; Convergence analysis




References

  1. Auslender, A., Teboulle, M.: Interior gradient and epsilon-subgradient methods for constrained convex minimization. Math. Oper. Res. 29, 1–26 (2004)
  2. Auslender, A., Teboulle, M.: Interior projection-like methods for monotone variational inequalities. Math. Program. 104, 39–68 (2005)
  3. Auslender, A., Teboulle, M.: Interior gradient and proximal methods for convex and conic optimization. SIAM J. Optim. 16, 697–725 (2006)
  4. Auslender, A., Teboulle, M., Ben-Tiba, S.: Interior proximal and multiplier methods based on second order homogeneous kernels. Math. Oper. Res. 24, 645–668 (1999)
  5. Beck, A., Teboulle, M.: Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett. 31, 167–175 (2003)
  6. Ben-Tal, A., Margalit, T., Nemirovsky, A.: The ordered subsets mirror descent optimization method with applications to tomography. SIAM J. Optim. 12, 79–108 (2001)
  7. Bertsekas, D.: Nonlinear Programming, 2nd edn. Athena Scientific, Belmont (1999)
  8. Birgin, E.G., Martinez, J.M., Raydan, M.: Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 10, 1196–1211 (2000)
  9. Birgin, E.G., Martinez, J.M., Raydan, M.: Algorithm 813: SPG: software for convexly constrained optimization. ACM Trans. Math. Softw. 27, 340–349 (2001)
  10. Dai, Y.H.: A nonmonotone conjugate gradient algorithm for unconstrained optimization. J. Syst. Sci. Complex. 15, 139–145 (2002)
  11. Dolan, E., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
  12. Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29, 373–394 (2003)
  13. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal. 23, 707–716 (1986)
  14. Grippo, L., Lampariello, F., Lucidi, S.: A truncated Newton method with nonmonotone line search for unconstrained minimization. J. Optim. Theory Appl. 60, 401–419 (1989)
  15. Lucidi, S., Rochetich, F., Roma, M.: Curvilinear stabilization techniques for truncated Newton methods in large-scale unconstrained optimization. SIAM J. Optim. 8, 916–939 (1998)
  16. Polyak, B.T.: Introduction to Optimization. Optimization Software, New York (1987)
  17. Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: Numerical Recipes in C: The Art of Scientific Computing, 2nd edn. Cambridge University Press, Cambridge (1992)
  18. Raydan, M.: The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM J. Optim. 7, 26–33 (1997)
  19. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
  20. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, New York (1998)
  21. Silva, P.J.S., Eckstein, J., Humes, C.: Rescaling and stepsize selection in proximal methods using separable generalized distances. SIAM J. Optim. 12, 238–261 (2001)
  22. Zhang, H., Hager, W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 14, 1043–1056 (2004)

Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  1. Alfred Auslender, Institut Camille Jordan, Université Lyon 1, Lyon, France
  2. Paulo J. S. Silva, Instituto de Matemática e Estatística, University of São Paulo, São Paulo, Brazil
  3. Marc Teboulle, School of Mathematical Sciences, Tel-Aviv University, Ramat-Aviv, Israel
