A class of accelerated GADMM-based method for multi-block nonconvex optimization problems

  • Original Paper

Abstract

To improve computational efficiency, we consider a class of accelerated methods, based on the generalized alternating direction method of multipliers (GADMM), for solving multi-block nonconvex and nonsmooth optimization problems. First, we linearize the smooth part of the objective function and add proximal terms to the subproblems, which yields the proximal linearized GADMM. We then introduce an inertial technique and obtain the inertial proximal linearized GADMM. Under appropriate assumptions, we prove the convergence of the regularized augmented Lagrangian function sequence. When some component functions of the objective are convex, we use an error bound condition to show that the sequences generated by the algorithms converge locally to a critical point at an R-linear rate. Finally, we apply the proposed algorithms to SCAD and robust PCA problems to verify their efficiency.
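For orientation, here is a minimal sketch of the problem class and the generalized multiplier update, reconstructed from the formulas recalled in the appendix; the precise subproblems, the inertial extrapolation step, and the parameter conditions are given in the full article and are not reproduced here. The methods address

$$\begin{aligned} \min _{x_{1},\dots ,x_{n},y}\ \sum \limits _{i=1}^{n}f_{i}(x_{i})+g(y)\quad \text {s.t.}\quad \sum \limits _{i=1}^{n}A_{i}x_{i}+By=b, \end{aligned}$$

with augmented Lagrangian

$$\begin{aligned} \mathcal {L}_{\beta }(w)=\sum \limits _{i=1}^{n}f_{i}(x_{i})+g(y)-\left\langle \lambda ,\sum \limits _{i=1}^{n}A_{i}x_{i}+By-b \right\rangle +\frac{\beta }{2}{\left\| \sum \limits _{i=1}^{n}A_{i}x_{i}+By-b \right\| }^{2}, \end{aligned}$$

and generalized (relaxed) multiplier update

$$\begin{aligned} \lambda ^{k+1}=\lambda ^{k}-\beta \Big (\sum \limits _{i=1}^{n}A_{i}x_{i}^{k+1}+By^{k+1}-b\Big )+\beta (1-s)\Big (\sum \limits _{i=1}^{n}A_{i}x_{i}^{k+1}+By^{k}-b\Big ), \end{aligned}$$

where \(w=(x_{1},\dots ,x_{n},y,\lambda )\), \(\beta >0\) is the penalty parameter, and \(s\) is the relaxation factor. The \(x_{i}\)- and \(y\)-subproblems minimize \(\mathcal {L}_{\beta }\) blockwise, with the smooth part linearized and proximal terms added (the matrices \(F_{i}\) and \(Q\) appearing in the appendix).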


Availability of supporting data

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Change history

  • 22 April 2024

    The correct name of the city in the second affiliation is Xuzhou.


Funding

The work described in this paper was jointly supported by grants from the National Natural Science Foundation of China (Nos. 72071202, 71971108), Top Six Talents’ Project of Jiangsu Province (No. XNYQC-001), Mathematics Tianyuan Fund of the National Natural Sciences Foundation of China (No. 12326321), Key Laboratory of Mathematics and Engineering Applications, Ministry of Education and Jiangsu National Applied Mathematics Center.

Author information

Authors and Affiliations

Authors

Contributions

K.Z. was mainly responsible for theoretical analysis and numerical experiments; H.S. and T.W. mainly contributed to algorithm design and theoretical analysis; X.W. mainly contributed to algorithm design and numerical experiments. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Hu Shao or Ting Wu.

Ethics declarations

Ethics approval

Not applicable.

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix. The proof of (34)

Here we give the proof of the inequality (34).

Proof

Let \({\bar{w}}^{k+1}=({\bar{x}}^{k+1},{\bar{y}}^{k+1},{\bar{\lambda }}^{k+1})\in \text {crit}\,\mathcal {L}_{\beta }\). Then

$$\begin{aligned} \sum \limits _{i=1}^{n}{{{A}_{i}}{\bar{x}_{i}}^{k+1}}+B{\bar{y}^{k+1}}-b=0. \end{aligned}$$
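Moreover, since the quadratic penalty term of \(\mathcal {L}_{\beta }\) contributes nothing to the subdifferential at a feasible point, a critical point also satisfies the stationarity conditions (a standard computation, added here for completeness):

$$\begin{aligned} 0\in \partial f_{i}(\bar{x}_{i}^{k+1})-A_{i}^{\top }\bar{\lambda }^{k+1},\ i=1,\dots ,n,\qquad 0=\nabla g(\bar{y}^{k+1})-B^{\top }\bar{\lambda }^{k+1}. \end{aligned}$$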

By the definition of \(\mathcal {L}_\beta (\cdot )\) in (2), it holds that

$$\begin{aligned} \begin{aligned}&\mathcal {L}_\beta \left( w^{k+1}\right) -\mathcal {L}_\beta \left( \bar{w}^{k+1}\right) \\ =&\left[ \sum \limits _{i=1}^{n}{{{f}_{i}}({x}_{i}^{k+1})}+g(y^{k+1})-\left\langle \lambda ^{k+1} ,\sum \limits _{i=1}^{n}{{{A}_{i}}{{x}_{i}^{k+1}}}+By^{k+1}-b \right\rangle \right. \\&\left. +\frac{\beta }{2}{{\left\| \sum \limits _{i=1}^{n}{{{A}_{i}}{{x}_{i}^{k+1}}}+By^{k+1}-b \right\| }^{2}}\right] \\&-\left[ \sum \limits _{i=1}^{n}{{f}_{i}}({\bar{x}_{i}^{k+1}})+g(\bar{y}^{k+1})-\left\langle \bar{\lambda }^{k+1} ,\sum \limits _{i=1}^{n}{{{A}_{i}}{\bar{x}_{i}^{k+1}}}+B\bar{y}^{k+1}-b \right\rangle \right. \\&\left. +\frac{\beta }{2}{{\left\| \sum \limits _{i=1}^{n}{{{A}_{i}}{\bar{x}_{i}^{k+1}}}+B\bar{y}^{k+1}-b \right\| }^{2}}\right] . \end{aligned} \end{aligned}$$
(A1)

From the convexity of \(f_i\), combined with the first-order optimality condition of the \(x_i\)-subproblem (which identifies the first argument of the inner product below as a subgradient of \(f_i\) at \(x_i^{k+1}\)), we get

$$\begin{aligned} \begin{aligned} {{{f}_{i}}({x}_{i}^{k+1})}-{{f}_{i}}({\bar{x}_{i}^{k+1}})\le&\left\langle {A}_{i}^{\top }{{\lambda }^{k}}-\beta {A}_{i}^{\top }(\sum \limits _{j=1}^{i}{{{A}_{j}}{x}_{j}^{k+1}+}\sum \limits _{j=i+1}^{n}{{{A}_{j}}{x}_{j}^{k}+}B{{y}^{k}}-b)\right. \\&\left. -F_i\Delta x_i^{k+1}, x_i^{k+1}-\bar{x}_i^{k+1}\right\rangle . \end{aligned} \end{aligned}$$
(A2)

From the Lipschitz continuity of \(\nabla g\), which yields \(g(u)-g(v)\le \left\langle \nabla g(u),u-v \right\rangle +\frac{L_g}{2}{\left\| u-v \right\| }^2\), and from the \(y\)-subproblem optimality condition \(B^{\top }\lambda ^{k+1}=\nabla g(y^{k})+Q\Delta y^{k+1}\), we have

$$\begin{aligned} \begin{aligned} g(y^{k+1})-g(\bar{y}^{k+1})\le&\left\langle \nabla g(y^{k+1})-\nabla g(y^{k})+B^{\top }\lambda ^{k+1}\right. \\&\left. -Q\Delta y^{k+1}, y^{k+1}-\bar{y}^{k+1}\right\rangle +\frac{L_g}{2}{\left\| y^{k+1}-\bar{y}^{k+1} \right\| }^2. \end{aligned} \end{aligned}$$
(A3)

Combining (A1), (A2) and (A3), it follows that

$$\begin{aligned}{} & {} \mathcal {L}_\beta \left( w^{k+1}\right) -\mathcal {L}_\beta \left( \bar{w}^{k+1}\right) \\ \le{} & {} \sum \limits _{i=1}^n \left\langle \lambda ^k,A_i(x_i^{k+1}-\bar{x}_i^{k+1})\right\rangle -\sum \limits _{i=1}^n\left\langle F_i\Delta x_i^{k+1},x_i^{k+1}-\bar{x}_i^{k+1}\right\rangle \\{} & {} -\beta \sum \limits _{i=1}^n \left\langle \sum \limits _{j=1}^n A_jx_j^{k+1}+By^k-b-\sum \limits _{j=i+1}^nA_j\Delta x_j^{k+1},A_i(x_i^{k+1}-\bar{x}_i^{k+1})\right\rangle \\{} & {} +\left\langle \nabla g(y^{k+1})-\nabla g(y^{k}),y^{k+1}-\bar{y}^{k+1}\right\rangle -\left\langle Q\Delta y^{k+1},y^{k+1}-\bar{y}^{k+1}\right\rangle \\{} & {} +\frac{L_g}{2}{\left\| y^{k+1}-\bar{y}^{k+1} \right\| }^2+\frac{\beta }{2}{{\left\| \sum \limits _{i=1}^{n}{{{A}_{i}}{{x}_{i}^{k+1}}}+By^{k+1}-b \right\| }^{2}}\\{} & {} \underbrace{+\left\langle B^{\top }\lambda ^{k+1},y^{k+1}-\bar{y}^{k+1}\right\rangle -\left\langle \lambda ^{k+1} ,\sum \limits _{i=1}^{n}{{{A}_{i}}{{x}_{i}^{k+1}}}+By^{k+1}-b \right\rangle }\\ ={} & {} -\left\langle \lambda ^{k+1} ,\sum \limits _{i=1}^{n}{{{A}_{i}}{{x}_{i}^{k+1}}}+By^{k+1}-b-By^{k+1}+B\bar{y}^{k+1} \right\rangle \\{} & {} +\left\langle \lambda ^k,\sum \limits _{i=1}^nA_i(x_i^{k+1}-\bar{x}_i^{k+1})\right\rangle -\sum \limits _{i=1}^n\left\langle F_i\Delta x_i^{k+1},x_i^{k+1}-\bar{x}_i^{k+1}\right\rangle \\{} & {} -\beta \sum \limits _{i=1}^n \left\langle \sum \limits _{j=1}^n A_jx_j^{k+1}+By^k-b-\sum \limits _{j=i+1}^nA_j\Delta x_j^{k+1},A_i(x_i^{k+1}-\bar{x}_i^{k+1})\right\rangle \\{} & {} +\left\langle \nabla g(y^{k+1})-\nabla g(y^{k}),y^{k+1}-\bar{y}^{k+1}\right\rangle \\{} & {} -\left\langle Q\Delta y^{k+1},y^{k+1}-\bar{y}^{k+1}\right\rangle +\frac{L_g}{2}{\left\| y^{k+1}-\bar{y}^{k+1} \right\| }^2\\{} & {} +\frac{\beta }{2}{{\left\| \sum \limits _{i=1}^{n}{{{A}_{i}}{{x}_{i}^{k+1}}}+By^{k+1}-b \right\| }^{2}}. \end{aligned}$$

Considering that \(\sum \nolimits _{i=1}^{n}{{{A}_{i}}{\bar{x}_{i}}^{k+1}}+B{\bar{y}^{k+1}}-b=0\) and \({{\lambda }^{k+1}}={{\lambda }^{k}}-\beta (\sum \limits _{i=1}^{n}{{{A}_{i}}}{x}_{i}^{k+1}+B{{y}^{k+1}}-b)+\beta (1-s)(\sum \limits _{i=1}^{n}{{{A}_{i}}}{x}_{i}^{k+1}+B{{y}^{k}}-b)\), we have

$$\begin{aligned}{} & {} \mathcal {L}_\beta \left( w^{k+1}\right) -\mathcal {L}_\beta \left( \bar{w}^{k+1}\right) \nonumber \\{} & {} \!=\left\langle \lambda ^k-\lambda ^{k+1},\!\sum \limits _{i=1}^n A_i(x_i^{k+1}\!-\!\bar{x}_i^{k+1})\right\rangle \!+\!\beta \sum \limits _{i=1}^n\left\langle \!\sum \limits _{j=i+1}^nA_j\Delta x_j^{k+1}, A_i(x_i^{k+1}\!-\!\bar{x}_i^{k+1})\right\rangle \nonumber \\{} & {} -\frac{1}{1-s}\left\langle \lambda ^{k+1}-\lambda ^{k}+\beta (\sum \limits _{i=1}^{n}{{{A}_{i}}{{x}_{i}^{k+1}}}+By^{k+1}-b),\sum \limits _{i=1}^n A_i(x_i^{k+1}-\bar{x}_i^{k+1})\right\rangle \nonumber \\{} & {} -\sum \limits _{i=1}^n\left\langle F_i\Delta x_i^{k+1},x_i^{k+1}-\bar{x}_i^{k+1}\right\rangle +\left\langle \nabla g(y^{k+1})-\nabla g(y^{k}),y^{k+1}-\bar{y}^{k+1}\right\rangle \nonumber \\{} & {} -\left\langle Q\Delta y^{k+1},y^{k+1}-\bar{y}^{k+1}\right\rangle +\frac{L_g}{2}{\left\| y^{k+1}-\bar{y}^{k+1} \right\| }^2\nonumber \\{} & {} +\frac{\beta }{2}{{\left\| \sum \limits _{i=1}^{n}{{{A}_{i}}{{x}_{i}^{k+1}}}+By^{k+1}-b \right\| }^{2}}. \end{aligned}$$
(A4)
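For the reader's convenience (a small added step, pure algebra), note that rearranging the multiplier update recalled above gives

$$\begin{aligned} \lambda ^{k}-\lambda ^{k+1}=s\beta \sum \limits _{i=1}^{n}A_{i}x_{i}^{k+1}+\beta (By^{k+1}-b)-\beta (1-s)(By^{k}-b), \end{aligned}$$

which, after moving the \(y\)-terms to the left-hand side, yields the identity used next.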

Further, from \(s\beta \sum \nolimits _{i=1}^nA_ix_i^{k+1}=\lambda ^k-\lambda ^{k+1}-\beta (By^{k+1}-b)+\beta (1-s)(By^k-b)\), we obtain the following two equalities:

$$\begin{aligned}&\sum \limits _{i=1}^{n}A_ix_i^{k+1}-\sum \limits _{i=1}^{n}A_i\bar{x}_i^{k+1}\\&=\frac{1}{s\beta }(\lambda ^k-\lambda ^{k+1})-\frac{1}{s}(By^{k+1}-B\bar{y}^{k+1})+\frac{1-s}{s}(By^k-B\bar{y}^{k+1}), \end{aligned}$$

and

$$\begin{aligned} \sum \limits _{i=1}^{n}A_ix_i^{k+1}+By^{k+1}-b=\frac{1}{s\beta }(\lambda ^k-\lambda ^{k+1})-\frac{1-s}{s}(By^{k+1}-By^k). \end{aligned}$$

Substituting the above two equalities into (A4), we obtain (34). \(\square \)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, K., Shao, H., Wu, T. et al. A class of accelerated GADMM-based method for multi-block nonconvex optimization problems. Numer Algor (2024). https://doi.org/10.1007/s11075-024-01821-z

