
A Modified Proximal Gradient Method for a Family of Nonsmooth Convex Optimization Problems

Journal of the Operations Research Society of China

Abstract

In this paper, we propose a modified proximal gradient method for solving a class of nonsmooth convex optimization problems that arise in many contemporary statistical and signal processing applications. The proposed method adopts a new scheme, built on the proximal gradient method, to construct the descent direction. It is proved that the modified proximal gradient method is Q-linearly convergent without assuming strong convexity of the objective function. Finally, numerical experiments are reported to evaluate the proposed method.
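
The paper's modified descent direction is developed in the full text; as background, the sketch below shows the standard proximal gradient (forward-backward) iteration that such methods build on, applied to the Lasso problem min 0.5*||Ax - b||^2 + lam*||x||_1. This is a minimal illustration in Python; all names, parameters, and the stopping rule are assumptions of this sketch, not the paper's algorithm.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrink each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, max_iter=500, tol=1e-8):
    # Classical forward-backward step: x+ = prox_{t*lam*||.||_1}(x - t*grad f(x)),
    # with f(x) = 0.5*||Ax - b||^2 and a fixed step t = 1/L,
    # where L = ||A||_2^2 is the Lipschitz constant of grad f.
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        x_new = soft_threshold(x - t * grad, t * lam)
        if np.linalg.norm(x_new - x) <= tol:
            break
        x = x_new
    return x

The modification studied in the paper constructs a new descent direction from this proximal gradient step and achieves Q-linear convergence without requiring the objective to be strongly convex.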



Author information

Correspondence to Ying-Yi Li.

Additional information

This work is supported by the National Natural Science Foundation of China (No. 61179033).


About this article


Cite this article

Li, YY., Zhang, HB. & Li, F. A Modified Proximal Gradient Method for a Family of Nonsmooth Convex Optimization Problems. J. Oper. Res. Soc. China 5, 391–403 (2017). https://doi.org/10.1007/s40305-017-0155-5
