Abstract
The undesirable effects of the direction of maximum magnification by the scaled memoryless DFP updating formula are studied. Then, to counteract these effects, a modified scaling parameter for the memoryless DFP method is derived. A concise convergence analysis of the modified method is also provided. Finally, the performance of the method is examined numerically on a set of standard test problems as well as on the well-known compressive sensing problem, for which a smooth relaxation of the \(\ell_1\)-norm regularization term is also proposed. The results illustrate the computational efficiency of the proposed method.
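For context, the memoryless DFP method updates a scaled identity matrix in place of the previous inverse Hessian approximation. With \(s_k = x_{k+1} - x_k\), \(y_k = g_{k+1} - g_k\), and a scaling parameter \(\theta_k > 0\), the scaled memoryless DFP update takes the standard form

\[
H_{k+1} = \theta_k I - \theta_k\,\frac{y_k y_k^{\top}}{y_k^{\top} y_k} + \frac{s_k s_k^{\top}}{s_k^{\top} y_k},
\]

for which \(\theta_k = s_k^{\top} y_k / y_k^{\top} y_k\) is a classical self-scaling choice in the spirit of Oren and Luenberger; the modified scaling parameter of the present paper differs from this baseline and is derived in the body of the article. The abstract likewise does not specify the proposed smooth relaxation of the \(\ell_1\)-norm term. The sketch below is a minimal illustration under the common assumption of the surrogate \(\|x\|_1 \approx \sum_i \sqrt{x_i^2 + \varepsilon^2}\); the helper names (smooth_l1, cs_objective, cs_gradient, memoryless_dfp_direction) are hypothetical and do not reflect the authors' implementation.

```python
import numpy as np

def smooth_l1(x, eps=1e-3):
    # Smooth surrogate for ||x||_1: sum_i sqrt(x_i^2 + eps^2).
    # Illustrative assumption; not necessarily the relaxation proposed in the paper.
    return np.sum(np.sqrt(x**2 + eps**2))

def cs_objective(x, A, b, lam=0.1, eps=1e-3):
    # Smoothed compressive-sensing objective: 0.5*||Ax - b||^2 + lam * smooth_l1(x).
    r = A @ x - b
    return 0.5 * (r @ r) + lam * smooth_l1(x, eps)

def cs_gradient(x, A, b, lam=0.1, eps=1e-3):
    # Gradient of the smoothed objective; differentiable everywhere for eps > 0.
    return A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + eps**2)

def memoryless_dfp_direction(g, s, y, theta):
    # Search direction d = -H g for the scaled memoryless DFP matrix
    # H = theta*I - theta*(y y^T)/(y^T y) + (s s^T)/(s^T y), computed matrix-free.
    return -theta * g + theta * (y @ g) / (y @ y) * y - (s @ g) / (s @ y) * s
```

The gradient of this smoothed objective can be supplied to any quasi-Newton line-search scheme; the matrix-free direction formula avoids ever forming \(H_{k+1}\) explicitly.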
Acknowledgements
This research was supported by grant no. 31.99.21870 from the Research Council of Semnan University. The authors thank the anonymous reviewers for their valuable comments, which helped to improve the quality of this work.
Cite this article
Aminifard, Z., Babaie-Kafaki, S. Analysis of the Maximum Magnification by the Scaled Memoryless DFP Updating Formula with Application to Compressive Sensing. Mediterr. J. Math. 18, 255 (2021). https://doi.org/10.1007/s00009-021-01905-3