
An active-set algorithmic framework for non-convex optimization problems over the simplex

Published in: Computational Optimization and Applications

Abstract

In this paper, we describe a new active-set algorithmic framework for minimizing a non-convex function over the unit simplex. At each iteration, the method makes use of a rule for identifying active variables (i.e., variables that are zero at a stationary point) and specific directions (that we name active-set gradient related directions) satisfying a new “nonorthogonality” type of condition. We prove global convergence to stationary points when using an Armijo line search in the given framework. We further describe three different examples of active-set gradient related directions that guarantee linear convergence rate (under suitable assumptions). Finally, we report numerical experiments showing the effectiveness of the approach.
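The paper's framework itself is only summarized above, but the general active-set idea it builds on (estimate which variables are zero at a stationary point, take a feasible descent step on the remaining free variables, and globalize with an Armijo line search) can be illustrated with a minimal sketch. The multiplier estimate `lam`, the threshold `eps`, and the centered reduced-gradient direction below are illustrative assumptions, not the authors' actual identification rule or their active-set gradient related directions.

```python
import numpy as np

def simplex_active_set_step(f, grad, x, eps=1e-6, gamma=1e-4, delta=0.5):
    """One illustrative active-set iteration on the unit simplex.

    Returns the new iterate and the step size taken (0.0 when the
    direction is negligible, i.e. at an estimated stationary point).
    """
    g = grad(x)
    lam = g.min()  # crude Lagrange multiplier estimate for the simplex constraint
    # Estimate active variables: (near-)zero components whose reduced
    # gradient g_i - lam is positive, i.e. expected to stay at zero.
    active = (x <= eps) & (g - lam > 0.0)
    free = ~active

    # Direction: negative reduced gradient on the free variables, centered
    # so its components sum to zero (the iterate stays on the hyperplane
    # sum(x) = 1); estimated-active variables are frozen.
    d = np.zeros_like(x)
    gf = g[free]
    d[free] = -(gf - gf.mean())
    if np.linalg.norm(d) < 1e-12:
        return x, 0.0

    # Largest step keeping x + t*d >= 0, then Armijo backtracking.
    neg = d < 0
    t = min(1.0, float(np.min(-x[neg] / d[neg]))) if neg.any() else 1.0
    fx, slope = f(x), float(g @ d)  # slope <= 0 by construction
    for _ in range(50):
        if f(x + t * d) <= fx + gamma * t * slope:
            break
        t *= delta
    return x + t * d, t
```

For instance, minimizing the strongly convex quadratic `f(x) = ||x - c||^2` with `c = (0.9, 0.1, -0.2)` from the barycenter drives the third variable to zero within a few iterations and converges to the projection of `c` onto the simplex, `(0.9, 0.1, 0)`, while every iterate stays feasible.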



Author information


Corresponding author

Correspondence to Andrea Cristofari.


About this article


Cite this article

Cristofari, A., De Santis, M., Lucidi, S. et al. An active-set algorithmic framework for non-convex optimization problems over the simplex. Comput Optim Appl 77, 57–89 (2020). https://doi.org/10.1007/s10589-020-00195-x
