A unified Douglas–Rachford algorithm for generalized DC programming

Journal of Global Optimization 82, 331–349 (2022)

Abstract

We consider a class of generalized DC (difference-of-convex functions) programming, namely the problem of minimizing the sum of two (possibly nonsmooth) convex functions minus a smooth convex function. To efficiently exploit the structure of the problem under consideration, we introduce a unified Douglas–Rachford method in Hilbert space. As an interesting byproduct of the unified framework, we easily show that the proposed algorithm can also handle convex composite optimization models. Despite the nonconvexity of DC programming, we prove that the proposed method converges to a critical point of the problem under suitable assumptions. Finally, we demonstrate numerically that the proposed algorithm outperforms the state-of-the-art DC algorithm and the alternating direction method of multipliers (ADMM) on DC regularized sparse recovery problems.
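To make the model concrete, the sketch below implements a Douglas–Rachford-type iteration for \(\min_x f(x)+g(x)-h(x)\), where \(f\) and \(g\) are (possibly nonsmooth) convex functions with computable proximal operators and \(h\) is smooth convex. This is a minimal illustration under our own assumptions: handling \(-h\) by linearizing \(h\) inside the second proximal step is one standard device from the nonconvex splitting literature, and the function names and parameters are ours, not necessarily the paper's exact unified scheme.

```python
import numpy as np

def douglas_rachford_dc(prox_f, prox_g, grad_h, x0, gamma=1.0,
                        max_iter=500, tol=1e-8):
    """Illustrative Douglas-Rachford-type iteration for
        min_x  f(x) + g(x) - h(x),
    where prox_f(v, gamma) and prox_g(v, gamma) evaluate the proximal
    mappings of gamma*f and gamma*g, and grad_h evaluates the gradient
    of the smooth convex part h (a sketch, not the paper's scheme).
    """
    z = np.asarray(x0, dtype=float).copy()
    x = prox_f(z, gamma)
    for _ in range(max_iter):
        # Linearize h at the current x: the gradient term shifts the
        # argument of the second proximal step to account for -h.
        y = prox_g(2.0 * x - z + gamma * grad_h(x), gamma)
        z = z + y - x  # standard Douglas-Rachford reflection update
        x_new = prox_f(z, gamma)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```

With \(h \equiv 0\) (i.e., grad_h returning zeros), this collapses to the classical Douglas–Rachford splitting for the sum of two convex functions, consistent with the convex composite byproduct mentioned in the abstract.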


Notes

  1. The Matlab codes can be downloaded from https://github.com/mingyan08/ProxL1-L2.
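For context, the DC regularized sparse recovery model referred to above takes the form \(\min_x \frac{1}{2}\|Ax-b\|^2+\lambda(\|x\|_1-\|x\|_2)\) (cf. [28, 42]). The snippet below sketches this objective and one DCA-style linearized step; the helper names and step-size choice are hypothetical illustrations on our part, not taken from the ProxL1-L2 code.

```python
import numpy as np

def l1_minus_l2_objective(A, b, x, lam):
    """DC regularized sparse recovery objective (cf. [28, 42]):
    0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2)."""
    r = A @ x - b
    return 0.5 * r @ r + lam * (np.abs(x).sum() - np.linalg.norm(x))

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_step(A, b, x, lam, step):
    """One proximal-gradient DCA-style step: linearize the concave part
    -lam*||x||_2 at x, take a gradient step on the smooth term, then
    apply the l1 proximal operator (the step size is an assumption)."""
    grad_smooth = A.T @ (A @ x - b)
    nx = np.linalg.norm(x)
    subgrad_l2 = x / nx if nx > 0 else np.zeros_like(x)  # any subgradient at 0
    v = x - step * grad_smooth + step * lam * subgrad_l2
    return soft_threshold(v, step * lam)
```

Iterating dca_step with a step size below 1/||A^T A|| decreases the objective monotonically, since the linearization majorizes the concave part; this is the kind of DC algorithm baseline against which the abstract reports comparisons.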

References

1. Alvarado, A., Scutari, G., Pang, J.: A new decomposition method for multiuser DC-programming and its applications. IEEE Trans. Signal Process. 62, 2984–2998 (2014)
2. Aragon Artacho, F., Borwein, J.: Global convergence of a non-convex Douglas–Rachford iteration. J. Glob. Optim. 57, 753–769 (2013)
3. Aragon Artacho, F., Borwein, J., Tam, M.: Global behavior of the Douglas–Rachford method for a nonconvex feasibility problem. J. Glob. Optim. 65, 309–327 (2016)
4. Aragon Artacho, F., Vuong, P.: The boosted difference of convex functions algorithm for nonsmooth functions. SIAM J. Optim. 30, 980–1006 (2020)
5. Bauschke, H., Combettes, P.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
6. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2010)
7. Candès, E., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207–1223 (2006)
8. Carrizosa, E., Guerrero, V., Morales, J.: Visualizing data as objects by DC (difference of convex) optimization. Math. Program. Ser. B 169, 119–140 (2018)
9. Chen, Y., Chi, Y.: Harnessing structures in big data via guaranteed low rank matrix estimation: recent theory and fast algorithms via convex and nonconvex optimization. IEEE Signal Process. Mag. 35(4), 14–31 (2018)
10. Combettes, P., Pesquet, J.: Proximal splitting methods in signal processing. In: Bauschke, H., Burachik, R., Combettes, P., Elser, V., Luke, D., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer Optimization and Its Applications, vol. 49, pp. 185–212. Springer, New York (2011)
11. Dao, M., Tam, M.: A Lyapunov-type approach to convergence of the Douglas–Rachford algorithm for a nonconvex setting. J. Glob. Optim. 73, 83–112 (2019)
12. Domingos, P.: A few useful things to know about machine learning. Commun. ACM 55, 78–87 (2012)
13. Eckstein, J.: Splitting methods for monotone operators with applications to parallel optimization. Ph.D. thesis, Massachusetts Institute of Technology (1989)
14. Eckstein, J., Bertsekas, D.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
15. Guo, K., Han, D.: A note on the Douglas–Rachford splitting method for optimization problems involving hypoconvex functions. J. Glob. Optim. 72, 431–441 (2018)
16. Guo, K., Han, D., Yuan, X.: Convergence analysis of Douglas–Rachford splitting method for “strongly+weakly” convex programming. SIAM J. Numer. Anal. 55, 1549–1577 (2017)
17. Han, D., He, H., Yang, H., Yuan, X.: A customized Douglas–Rachford splitting algorithm for separable convex minimization with linear constraints. Numer. Math. 127, 167–200 (2014)
18. Horst, R., Thoai, N.: DC programming: overview. J. Optim. Theory Appl. 103, 1–43 (1999)
19. Jain, P., Kar, P.: Non-convex optimization for machine learning. Found. Trends Mach. Learn. 10(3–4), 142–336 (2017)
20. Khabbazibasmenj, A., Roemer, F., Vorobyov, S., Haardt, M.: Sum-rate maximization in two-way AF MIMO relaying: polynomial time solutions to a class of DC programming problems. IEEE Trans. Signal Process. 60, 5478–5493 (2012)
21. Le Thi, H., Pham Dinh, T.: A continuous approach for the concave cost supply problem via DC programming and DCA. Discrete Appl. Math. 156, 325–338 (2008)
22. Le Thi, H., Pham Dinh, T.: Feature selection in machine learning: an exact penalty approach using a difference of convex function algorithm. Mach. Learn. 101, 163–186 (2015)
23. Le Thi, H., Pham Dinh, T.: DC programming and DCA: thirty years of developments. Math. Program. Ser. A 169, 5–68 (2018)
24. Le Thi, H., Tran, D.: Optimizing a multi-stage production/inventory system by DC programming based approaches. Comput. Optim. Appl. 57, 441–468 (2014)
25. Li, G.Y., Pong, T.K.: Douglas–Rachford splitting for nonconvex optimization with application to nonconvex feasibility problems. Math. Program. 159, 371–401 (2016)
26. Li, M., Wu, Z.M.: Convergence analysis of the generalized splitting methods for a class of nonconvex optimization problems. J. Optim. Theory Appl. 183, 535–565 (2019)
27. Liu, T., Pong, T.K., Takeda, A.: A refined convergence analysis of pDCAe with applications to simultaneous sparse recovery and outlier detection. Comput. Optim. Appl. 73, 69–100 (2019)
28. Lou, Y., Yan, M.: Fast l1–l2 minimization via a proximal operator. J. Sci. Comput. 74, 767–785 (2018)
29. Lou, Y., Zeng, T., Osher, S., Xin, J.: A weighted difference of anisotropic and isotropic total variation model for image processing. SIAM J. Imaging Sci. 8, 1798–1823 (2015)
30. Lu, Z., Zhou, Z.: Nonmonotone enhanced proximal DC algorithms for structured nonsmooth DC programming. SIAM J. Optim. 29, 2725–2752 (2019)
31. Lu, Z., Zhou, Z., Sun, Z.: Enhanced proximal DC algorithms with extrapolation for a class of structured nonsmooth DC minimization. Math. Program. Ser. B 176, 369–401 (2019)
32. Luke, D.R., Martins, A.: Convergence analysis of the relaxed Douglas–Rachford algorithm. SIAM J. Optim. 30, 542–584 (2020)
33. Marino, G., Xu, H.K.: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 3, 791–808 (2004)
34. Miao, W., Pan, S., Sun, D.: A rank-corrected procedure for matrix completion with fixed basis coefficients. Math. Program. 159, 289–338 (2016)
35. Pham Dinh, T., Le Thi, H.: Convex analysis approach to DC programming: theory, algorithms and applications. Acta Math. Vietnamica 22, 289–355 (1997)
36. Pham Dinh, T., Souad, E.B.: Algorithms for solving a class of nonconvex optimization problems. Methods of subgradients. In: Hiriart-Urruty, J.B. (ed.) Fermat Days 85: Mathematics for Optimization. North-Holland Mathematics Studies, vol. 129, pp. 249–271. North-Holland, Amsterdam (1986)
37. Piot, B., Geist, M., Pietquin, O.: Difference of convex functions programming for reinforcement learning. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K. (eds.) Advances in Neural Information Processing Systems, vol. 27, pp. 2519–2527. Curran Associates, Red Hook (2017)
38. Sun, T., Yin, P., Cheng, L., Jiang, H.: Alternating direction method of multipliers with difference of convex functions. Adv. Comput. Math. 44, 723–744 (2018)
39. Ta, M., Le Thi, H., Boudjeloud-Assala, L.: Clustering data stream by a sub-window approach using DCA. In: Perner, P. (ed.) Machine Learning and Data Mining in Pattern Recognition, pp. 279–292. Springer, Berlin (2012)
40. Themelis, A., Patrinos, P.: Douglas–Rachford splitting and ADMM for nonconvex optimization: tight convergence results. SIAM J. Optim. 30, 149–181 (2020)
41. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Comput. Optim. Appl. 69, 297–324 (2018)
42. Yin, P., Lou, Y., He, Q., Xin, J.: Minimization of \(\ell_{1-2}\) for compressed sensing. SIAM J. Sci. Comput. 37, A536–A563 (2015)
43. Zhang, F., Yang, Z., Chen, Y., Yang, J., Yang, G.: Matrix completion via capped nuclear norm. IET Image Process. 12, 959–966 (2018)

Acknowledgements

The authors are grateful to the anonymous referees for their close reading and valuable comments, which led to significant improvements of the paper; in particular, one referee brought the relevant references [26, 27, 41] to our attention. H. He was supported in part by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LY20A010018) and the National Natural Science Foundation of China (Grant No. 11771113).

Author information

Corresponding author

Correspondence to Hongjin He.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Chuang, CS., He, H. & Zhang, Z. A unified Douglas–Rachford algorithm for generalized DC programming. J Glob Optim 82, 331–349 (2022). https://doi.org/10.1007/s10898-021-01079-y

