Solving nonnegative sparsity-constrained optimization via DC quadratic-piecewise-linear approximations

Journal of Global Optimization

Abstract

In this paper, we propose a novel algorithm based on quadratic-piecewise-linear approximations of DC (difference-of-convex) functions for solving nonnegative sparsity-constrained optimization problems. A penalized DC formulation is proved to be equivalent to the original problem under a suitable penalty parameter. The key ingredient of the main algorithm is to apply quadratic-piecewise-linear approximations to the two convex parts of the DC objective function; the resulting nonconvex subproblem can be solved by a globally convergent alternating-variable algorithm. Under mild conditions, we prove that the proposed algorithm for the penalized problem is globally convergent. Preliminary numerical results on sparse nonnegative least squares and sparse logistic regression problems demonstrate the efficiency of our algorithm.
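To make the penalized DC formulation mentioned in the abstract more concrete, the display below sketches one standard construction from the DC-sparsity literature; it is illustrative only, and the paper's exact penalty and approximation scheme may differ. Writing \(T_s(x)\) for the sum of the \(s\) largest entries of \(x\ge 0\) (a convex, piecewise-linear function, the largest-\(s\) norm restricted to the nonnegative orthant; the notation \(T_s\) is introduced here), the cardinality constraint \(\|x\|_0\le s\) can be absorbed into a nonnegative penalty term:

\[
\min_{x\ge 0}\; f(x) \;+\; \rho\bigl(\mathbf{1}^{\top}x - T_s(x)\bigr), \qquad \rho>0,
\]

where, for \(x\ge 0\), \(\mathbf{1}^{\top}x - T_s(x)\ge 0\) with equality if and only if \(\|x\|_0\le s\). Both \(\mathbf{1}^{\top}x\) and \(T_s(x)\) are convex, so the penalized objective is a DC function whenever \(f\) is convex, and the penalty vanishes exactly on the feasible set of the original constrained problem.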

Acknowledgements

We are very grateful to Dr. Tuell Green and Sharla Green for their editorial suggestions, which improved the writing quality of the paper. We would also like to thank the Editor and the two anonymous referees for their helpful comments.

Author information

Corresponding author

Correspondence to Chungen Shen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Shen, C., Liu, X. Solving nonnegative sparsity-constrained optimization via DC quadratic-piecewise-linear approximations. J Glob Optim 81, 1019–1055 (2021). https://doi.org/10.1007/s10898-021-01028-9
