
An asynchronous subgradient-proximal method for solving additive convex optimization problems

  • Original Research
  • Journal of Applied Mathematics and Computing

Abstract

In this paper, we consider additive convex optimization problems in which the objective function is the sum of a large number of convex nondifferentiable cost functions. We assume that each cost function is written as the sum of two convex nondifferentiable functions, one of which is suitable for the subgradient method while the other is not. To handle this structure, we propose a distributed optimization algorithm based on the subgradient and proximal methods. The proposed method also has an asynchronous feature that allows time-varying delays when computing the subgradients. We prove convergence of the function values of the iterates to the optimal value. To demonstrate the efficiency of the theoretical results, we investigate the binary classification problem via support vector machine learning.
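The paper's exact algorithm and assumptions are not reproduced on this page, so the following Python sketch only illustrates the general subgradient-proximal idea described in the abstract on a toy problem: the hinge loss plays the role of the subgradient-friendly part, an l1 penalty plays the role of the proximal-friendly part (its proximal operator is soft-thresholding), and asynchrony is simulated by evaluating subgradients at stale iterates. All function names, the splitting, and the parameter choices are illustrative assumptions, not the authors' method.

```python
# Minimal, illustrative sketch (not the authors' exact algorithm) of a
# subgradient-proximal iteration for an additive objective
#   sum_i [ f_i(x) + g_i(x) ],
# where each f_i is handled by a subgradient step (hinge loss here) and each
# g_i by a proximal step (an l1 term here, whose prox is soft-thresholding).
# Delays are simulated by computing subgradients at stale copies of x.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: m samples, labels in {-1, +1}.
m, d = 200, 20
A = rng.normal(size=(m, d))
w_true = rng.normal(size=d)
b = np.sign(A @ w_true)

lam = 0.01      # weight of the l1 (proximal-friendly) part
tau_max = 5     # simulated delays are drawn from {0, ..., tau_max - 1}
history = []    # past iterates, used only to simulate asynchronous delays

def hinge_subgrad(x, a_i, b_i):
    """A subgradient of the hinge loss max(0, 1 - b_i * <a_i, x>)."""
    return -b_i * a_i if b_i * (a_i @ x) < 1.0 else np.zeros_like(x)

def prox_l1(x, step):
    """Proximal operator of step * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

x = np.zeros(d)
for k in range(1, 2001):
    step = 1.0 / k                                  # diminishing step size
    history.append(x.copy())
    delay = rng.integers(0, min(tau_max, len(history)))
    x_stale = history[-1 - delay]                   # possibly delayed iterate
    i = rng.integers(m)                             # pick one component function
    x = x - step * hinge_subgrad(x_stale, A[i], b[i])   # subgradient step on f_i
    x = prox_l1(x, step)                                # proximal step on g_i

print("final objective value:",
      np.mean(np.maximum(0.0, 1.0 - b * (A @ x))) + lam * np.abs(x).sum())
```

With a diminishing step size, the objective value of this toy run decreases toward the optimal value despite the stale subgradients, which mirrors the kind of convergence statement the abstract announces.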


Availability of data and materials

The datasets that support the findings of this study are available as follows: (i) The hiseq dataset is available from https://archive.ics.uci.edu/ml/datasets/gene+expression+cancer+RNA-Seq. (ii) The DrivFace dataset is available from https://archive.ics.uci.edu/ml/datasets/DrivFace. (iii) The gisette dataset is available from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#gisette.
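As an aside, the gisette data above is distributed in LIBSVM sparse format and can be read with standard tooling. The sketch below assumes scikit-learn is installed and that the compressed file (named gisette_scale.bz2 here, an assumption) has been downloaded from the page above.

```python
# Sketch of loading the gisette dataset (LIBSVM sparse format); assumes the
# file gisette_scale.bz2 has been downloaded locally from the LIBSVM page.
from sklearn.datasets import load_svmlight_file

X, y = load_svmlight_file("gisette_scale.bz2")  # X: sparse feature matrix, y: labels in {-1, +1}
print(X.shape, y.shape)
```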


Acknowledgements

The authors are grateful to the Editor and the two anonymous referees for their comments and remarks, which improved the quality and presentation of the paper. T. Arunrat was supported by the Development and Promotion of Science and Technology Talents Project (DPST).

Funding

This work has received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [Grant No. B05F650018].

Author information

Corresponding author

Correspondence to Nimit Nimana.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Arunrat, T., Namsak, S. & Nimana, N. An asynchronous subgradient-proximal method for solving additive convex optimization problems. J. Appl. Math. Comput. 69, 3911–3936 (2023). https://doi.org/10.1007/s12190-023-01908-1

