
Analysis of the Maximum Magnification by the Scaled Memoryless DFP Updating Formula with Application to Compressive Sensing


Abstract

Undesirable effects of the direction of maximum magnification by the scaled memoryless DFP updating formula are studied. To overcome these effects, a modified scaling parameter for the memoryless DFP method is derived. A concise convergence analysis of the modified method is provided as well. Finally, the performance of the method is examined numerically on a set of standard test problems as well as on the well-known compressive sensing problem, for which a smooth relaxation of the \(\ell _1\)-norm regularization term is also proposed. The results illustrate the computational efficiency of the proposed method.
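
For background, the scaled memoryless DFP formula studied here has the following standard form; this is a sketch of the classical update, the paper's contribution being a modified choice of the scaling parameter \(\theta _k\). With \(s_k = x_{k+1} - x_k\) and \(y_k = \nabla f(x_{k+1}) - \nabla f(x_k)\), setting \(H_k = \theta _k I\) in the DFP inverse-Hessian update yields

\[ H_{k+1} = \theta _k \left( I - \frac{y_k y_k^{T}}{y_k^{T} y_k} \right) + \frac{s_k s_k^{T}}{s_k^{T} y_k}. \]

The direction of maximum magnification of \(H_{k+1}\) is the unit vector \(w\) that maximizes \(\Vert H_{k+1} w\Vert _2\), that is, the singular vector associated with the largest singular value of \(H_{k+1}\); a poorly chosen \(\theta _k\) can make this magnification large, which motivates the modified parameter.

Likewise, a smooth relaxation of the \(\ell _1\)-norm replaces each nonsmooth term \(|x_i|\) in the compressive sensing objective \(\min _x \frac{1}{2}\Vert Ax-b\Vert _2^2 + \lambda \Vert x\Vert _1\) by a differentiable surrogate. As an illustration of the general technique only (a common surrogate, not necessarily the relaxation proposed in this article), for a smoothing parameter \(\epsilon > 0\) one may take

\[ |x_i| \approx \sqrt{x_i^2 + \epsilon }, \qquad \frac{\partial }{\partial x_i}\sqrt{x_i^2 + \epsilon } = \frac{x_i}{\sqrt{x_i^2 + \epsilon }}, \]

so that the relaxed objective is continuously differentiable and amenable to gradient-based quasi-Newton schemes such as the one developed here.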




Acknowledgements

This research was supported by grant no. 31.99.21870 from the Research Council of Semnan University. The authors thank the anonymous reviewers for their valuable comments, which helped improve the quality of this work.

Author information

Correspondence to Saman Babaie-Kafaki.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Aminifard, Z., Babaie-Kafaki, S. Analysis of the Maximum Magnification by the Scaled Memoryless DFP Updating Formula with Application to Compressive Sensing. Mediterr. J. Math. 18, 255 (2021). https://doi.org/10.1007/s00009-021-01905-3

