
The Regularization Continuation Method for Optimization Problems with Nonlinear Equality Constraints

  • Published in: Journal of Scientific Computing

Abstract

This paper considers the regularization continuation method with a trust-region updating strategy for optimization problems with nonlinear equality constraints. In the well-posed phase, the method uses the inverse of the regularized quasi-Newton matrix as the preconditioner and applies the Sherman–Morrison–Woodbury formula to improve computational efficiency; in the ill-conditioned phase, it adopts the inverse of the regularized two-sided projection of the Hessian as the preconditioner to improve robustness. Because the new method only solves a linear system of equations at every iteration, with the Sherman–Morrison–Woodbury formula significantly reducing the cost of each solve, whereas the sequential quadratic programming (SQP) method must solve a quadratic programming subproblem at every iteration, the new method is faster than SQP. Numerical results also show that it is more robust and faster than two SQP implementations: the built-in subroutine fmincon.m of the MATLAB R2020a environment and the subroutine SNOPT executed in the GAMS v28.2 (GAMS Corporation, 2019) environment. For the large-scale problems, the computational time of the new method is about one third of that of fmincon.m. Finally, a global convergence analysis of the new method is given.
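The role of the Sherman–Morrison–Woodbury formula described above can be illustrated with a small sketch (this is an illustration of the identity, not the authors' implementation): when the regularized matrix is a cheap-to-invert term \(\sigma I\) plus a low-rank correction \(UCV\), the identity replaces one n-by-n solve with one k-by-k solve, where k is the rank of the correction.

```python
import numpy as np

# Sherman–Morrison–Woodbury identity:
#   (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
# With A = sigma*I (as in a regularized quasi-Newton matrix), A^{-1} is trivial,
# so solving (sigma*I + U C V) x = b costs only one k-by-k dense solve.

def smw_solve(sigma, U, C, V, b):
    """Solve (sigma*I + U @ C @ V) x = b via Sherman-Morrison-Woodbury."""
    Ainv_b = b / sigma                       # A^{-1} b  with A = sigma*I
    Ainv_U = U / sigma                       # A^{-1} U
    small = np.linalg.inv(C) + V @ Ainv_U    # k-by-k capacitance matrix, k << n
    return Ainv_b - Ainv_U @ np.linalg.solve(small, V @ Ainv_b)

rng = np.random.default_rng(0)
n, k, sigma = 500, 4, 2.0                    # hypothetical sizes for illustration
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))
C = np.diag(rng.uniform(1.0, 2.0, k))
b = rng.standard_normal(n)

x = smw_solve(sigma, U, C, V, b)
x_ref = np.linalg.solve(sigma * np.eye(n) + U @ C @ V, b)
print(np.allclose(x, x_ref))  # True
```

The direct solve on the last line costs O(n^3); the SMW route costs O(n*k^2 + k^3) plus matrix-vector products, which is the source of the per-iteration savings claimed in the abstract.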


Data Availability Statement

The test data are available from the authors upon request.

Code Availability

The code is available from the authors upon request.

References

  1. Andrei, N.: An unconstrained optimization test functions collection. Adv. Model. Optim. 10, 147–161 (2008)

  2. Adorio, E.P., Diliman, U.P.: MVF—Multivariate test functions library in C for unconstrained global optimization, http://www.geocities.ws/eadorio/mvf.pdf (2005)

  3. Allgower, E.L., Georg, K.: Introduction to Numerical Continuation Methods. SIAM, Philadelphia (2003)

  4. Ascher, U.M., Petzold, L.R.: Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. SIAM, Philadelphia (1998)

  5. Boggs, P.T., Tolle, J.W.: Sequential quadratic programming. Acta Numer. 4, 1–51 (1995)

  6. Broyden, C.G.: The convergence of a class of double-rank minimization algorithms. J. Inst. Math. Appl. 6, 76–90 (1970)

  7. Brown, A.A., Bartholomew-Biggs, M.C.: ODE versus SQP methods for constrained optimization. J. Optim. Theory Appl. 62, 371–386 (1989)

  8. Brenan, K.E., Campbell, S.L., Petzold, L.R.: Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. SIAM, Philadelphia (1996)

  9. Bioucas-Dias, J.M., Figueiredo, M.A.T.: Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing. In: 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, 1–4, http://doi.org/10.1109/WHISPERS.2010.5594963 (2010)

  10. Butcher, J.C., Jackiewicz, Z.: Construction of high order diagonally implicit multistage integration methods for ordinary differential equations. Appl. Numer. Math. 27, 1–12 (1998)

  11. Byrd, R., Nocedal, J., Yuan, Y.X.: Global convergence of a class of quasi-Newton methods on convex problems. SIAM J. Numer. Anal. 24, 1171–1189 (1987)

  12. Caballero, F., Merino, L., Ferruz, J., Ollero, A.: Vision-based odometry and SLAM for medium and high altitude flying UAVs. J. Intell. Robot. Syst. 54, 137–161 (2009)

  13. Coffey, T.S., Kelley, C.T., Keyes, D.E.: Pseudotransient continuation and differential-algebraic equations. SIAM J. Sci. Comput. 25, 553–569 (2003)

  14. Conn, A.R., Gould, N., Toint, Ph.L.: Trust-Region Methods. SIAM, Philadelphia (2000)

  15. Chu, M.T., Lin, M.M.: Dynamical system characterization of the central path and its variants-a revisit. SIAM J. Appl. Dyn. Syst. 10, 887–905 (2011)

  16. d’Aspremont, A., El Ghaoui, L., Jordan, M., Lanckriet, G.R.: A direct formulation for sparse PCA using semidefinite programming. SIAM Rev. 49, 434–448 (2007)

  17. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  18. Edelman, A., Arias, T.A., Smith, S.T.: The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl. 20, 303–353 (1999)

  19. Fiacco, A.V., McCormick, G.P.: Nonlinear Programming: Sequential Unconstrained Minimization Techniques. SIAM, Philadelphia (1990)

  20. Fletcher, R.: A new approach to variable metric algorithms. Comput. J. 13, 317–322 (1970)

  21. Fletcher, R., Powell, M.J.D.: A rapidly convergent descent method for minimization. Comput. J. 6, 163–168 (1963)

  22. Fadili, J.M., Starck, J.L.: Monotone operator splitting for optimization problems in sparse recovery. In: IEEE ICIP, Cairo, Egypt, pp. 1461–1464, http://doi.org/10.1109/ICIP.2009.5414555 (2009)

  23. Figueiredo, M.A.T., Bioucas-Dias, J.M.: Restoration of Poissonian images using alternating direction optimization. IEEE Trans. Image Process. 19, 3133–3145 (2010)

  24. Forero, P.A., Cano, A., Giannakis, G.B.: Consensus-based distributed support vector machines. J. Mach. Learn. Res. 11, 1663–1707 (2010)

  25. GAMS v28.2, GAMS Corporation, https://www.gams.com/ (2019)

  26. Goh, B.S.: Approximate greatest descent methods for optimization with equality constraints. J. Optim. Theory Appl. 148, 505–527 (2011)

  27. Goldfarb, D.: A family of variable metric updates derived by variational means. Math. Comput. 24, 23–26 (1970)

  28. Gould, N.I.M., Orban, D., Toint, Ph.L.: CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. 60, 545–557 (2015)

  29. Golub, G.H., Van Loan, C.F.: Matrix Computations, 4th edn. The Johns Hopkins University Press, Baltimore (2013)

  30. Gill, P.E., Murray, W., Wright, M.H.: Practical Optimization. Academic Press, London (1981)

  31. Gill, P.E., Murray, W., Saunders, M.A.: SNOPT: an SQP algorithm for large-scale constrained optimization. SIAM Rev. 47, 99–131 (2005)

  32. Gill, P.E., Murray, W., Saunders, M.A.: User’s guide for SQOPT Version 7: software for large-scale linear and quadratic programming (2006)

  33. Han, S.P.: A globally convergent method for nonlinear programming. J. Optim. Theory Appl. 22, 297–307 (1977)

  34. Hansen, P.C.: Regularization tools: a MATLAB package for analysis and solution of discrete ill-posed problems. Numer. Algorithms 6, 1–35 (1994)

  35. Helmke, U., Moore, J.B.: Optimization and Dynamical Systems, 2nd edn. Springer, London (1996)

  36. Higham, D.J.: Trust region algorithms and timestep selection. SIAM J. Numer. Anal. 37, 194–210 (1999)

  37. Hock, W., Schittkowski, K.: A comparative performance evaluation of 27 nonlinear programming codes. Computing 30, 335–358, https://doi.org/10.1007/BF02242139 (1983)

  38. Jackiewicz, Z., Tracogna, S.: A general class of two-step Runge–Kutta methods for ordinary differential equations. SIAM J. Numer. Anal. 32, 1390–1427 (1995)

  39. Kelley, C.T., Liao, L.-Z., Qi, L., Chu, M.T., Reese, J.P., Winton, C.: Projected pseudotransient continuation. SIAM J. Numer. Anal. 46, 3071–3083 (2008)

  40. Lee, J.H., Jung, Y.M., Yuan, Y.X., Yun, S.: A subspace SQP method for equality constrained optimization. Comput. Optim. Appl. 74, 177–194 (2019)

  41. Lukšan, L.: Inexact trust region method for large sparse systems of nonlinear equations. J. Optim. Theory Appl. 81, 569–590 (1994)

  42. Levenberg, K.: A method for the solution of certain problems in least squares. Q. Appl. Math. 2, 164–168 (1944)

  43. Liao, L.-Z., Qi, H.D., Qi, L.Q.: Neurodynamical optimization. J. Glob. Optim. 28, 175–195 (2004)

  44. Liu, X.-W., Yuan, Y.-X.: A sequential quadratic programming method without a penalty function or a filter for nonlinear equality constrained optimization. SIAM J. Optim. 21, 545–571 (2011)

  45. Lu, Z.S., Pong, T.K., Zhang, Y.: An alternating direction method for finding Dantzig selectors. Comput. Stat. Data Anal. 56, 4037–4046, https://doi.org/10.1016/j.csda.2012.04.019 (2012)

  46. Luo, X.-L.: Singly diagonally implicit Runge–Kutta methods combining line search techniques for unconstrained optimization. J. Comput. Math. 23, 153–164 (2005)

  47. Luo, X.-L., Liao, L.-Z., Tam, H.-W.: Convergence analysis of the Levenberg–Marquardt method. Optim. Methods Softw. 22, 659–678 (2007)

  48. Liu, S.-T., Luo, X.-L.: A method based on Rayleigh quotient gradient flow for extreme and interior eigenvalue problems. Linear Algebra Appl. 432, 1851–1863 (2010)

  49. Luo, X.-L.: A dynamical method of DAEs for the smallest eigenvalue problem. J. Comput. Sci. 3, 113–119 (2012)

  50. Luo, X.-L., Lin, J.-R., Wu, W.-L.: A prediction-correction dynamic method for large-scale generalized eigenvalue problems. Abstr. Appl. Anal. (2013), Article ID 845459, 1–8, http://dx.doi.org/10.1155/2013/845459

  51. Luo, X.-L., Lv, J.-H., Sun, G.: Continuation methods with the trusty time-stepping scheme for linearly constrained optimization with noisy data. Optim. Eng. 23, 329–360, https://doi.org/10.1007/s11081-020-09590-z (2022)

  52. Luo, X.-L., Xiao, H., Lv, J.-H.: Continuation Newton methods with the residual trust-region time-stepping scheme for nonlinear equations. Numer. Algorithms 89, 223–247, https://doi.org/10.1007/s11075-021-01112-x (2022)

  53. Luo, X.-L., Yao, Y.-Y.: Primal-dual path-following methods and the trust-region updating strategy for linear programming with noisy data. J. Comput. Math. 40, 760–780, https://doi.org/10.4208/jcm.2101-m2020-0173 (2022)

  54. Luo, X.-L., Xiao, H., Lv, J.-H., Zhang, S.: Explicit pseudo-transient continuation and the trust-region updating strategy for unconstrained optimization. Appl. Numer. Math. 165, 290–302, https://doi.org/10.1016/j.apnum.2021.02.019 (2021)

  55. Luo, X.-L., Xiao, H.: Generalized continuation Newton methods and the trust-region updating strategy for the underdetermined system. J. Sci. Comput. 88, article 56, pp. 1–22. https://doi.org/10.1007/s10915-021-01566-0 (2021)

  56. Luo, X.-L., Xiao, H.: The regularization continuation method with an adaptive time step control for linearly constrained optimization problems. Appl. Numer. Math. 181, 255–276, https://doi.org/10.1016/j.apnum.2022.06.008 (2022)

  57. Luo, X.-L., Xiao, H., Zhang, S.: Continuation Newton methods with deflation techniques for global optimization problems, arXiv preprint available at http://arxiv.org/abs/2107.13864, or Research Square preprint available at https://doi.org/10.21203/rs.3.rs-1102775/v1, July 30, 2021. Software available at https://teacher.bupt.edu.cn/luoxinlong/zh_CN/zzcg/41406/list/index.htm

  58. Luo, X.-L., Zhang, S., Xiao, H.: Residual regularization path-following methods for linear complementarity problems, arXiv preprint available at http://arxiv.org/abs/2205.10727, pp. 1–30 (2022)

  59. Ng, M., Weiss, P., Yuan, X.-M.: Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods. SIAM J. Sci. Comput. 32, 2710–2736, https://doi.org/10.1137/090774823 (2010)

  60. Mascarenhas, W.F.: The BFGS method with exact line searches fails for non-convex objective functions. Math. Program. 99, 49–61 (2004)

  61. Mak, M.-W.: Lecture notes of constrained optimization and support vector machines, http://www.eie.polyu.edu.hk/~mwmak/EIE6207/ContOpt-SVM-beamer.pdf (2019)

  62. MATLAB v9.8.0 (R2020a), The MathWorks Inc., http://www.mathworks.com (2020)

  63. Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7, 17–41 (1981)

  64. Marquardt, D.: An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 11, 431–441 (1963)

  65. Maculan, N., Lavor, C.: A function to test methods applied to global minimization of potential energy of molecules. Numer. Algorithms 35, 287–300 (2004)

  66. Moore, E.H.: On the reciprocal of the general algebraic matrix. Bull. New Ser. Am. Math. Soc. 26, 394–395 (1920)

  67. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, Berlin (1999)

  68. Osborne, M.J.: Mathematical methods for economic theory, https://mjo.osborne.economics.utoronto.ca/index.php/tutorial/index/1/mem (2016)

  69. Pan, P.-Q.: New ODE methods for equality constrained optimization (2): algorithms. J. Comput. Math. 10, 129–146 (1992)

  70. Penrose, R.: A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 51, 406–413 (1955)

  71. Powell, M.J.D.: Convergence properties of a class of minimization algorithms. In: Mangasarian, O.L., Meyer, R.R., Robinson, S.M. (eds.) Nonlinear Programming 2, pp. 1–27. Academic Press, New York (1975)

  72. Powell, M.J.D.: A fast algorithm for nonlinearly constrained optimization calculations. In: Watson, G.A. (ed.) Numerical Analysis, pp. 144–157. Springer, Berlin (1978)

  73. Powell, M.J.D.: The convergence of variable metric methods for nonlinearly constrained optimization calculations. In: Mangasarian, O.L., Meyer, R.R., Robinson, S.M. (eds.) Nonlinear Programming 3, pp. 27–63. Academic Press, New York (1978)

  74. Schittkowski, K.: NLPQL: a Fortran subroutine solving constrained nonlinear programming problems. Ann. Oper. Res. 5, 485–500, https://doi.org/10.1007/BF02739235 (1986)

  75. Schropp, J.: A dynamical systems approach to constrained minimization. Numer. Funct. Anal. Optim. 21, 537–551 (2000)

  76. Schropp, J.: One and multistep discretizations of index 2 differential algebraic systems and their use in optimization. J. Comput. Appl. Math. 150, 375–396 (2003)

  77. Shampine, L.F., Gladwell, I., Thompson, S.: Solving ODEs with MATLAB. Cambridge University Press, Cambridge (2003)

  78. Shanno, D.F.: Conditioning of quasi-Newton methods for function minimization. Math. Comput. 24, 647–656 (1970)

  79. Steidl, G., Teuber, T.: Removing multiplicative noise by Douglas–Rachford splitting methods. J. Math. Imaging Vis. 36, 168–184 (2010)

  80. Surjanovic, S., Bingham, D.: Virtual library of simulation experiments: Test functions and datasets, retrieved from http://www.sfu.ca/~ssurjano (2020)

  81. Sun, W.Y., Yuan, Y.X.: Optimization Theory and Methods: Nonlinear Programming. Springer, New York (2006)

  82. Tanabe, K.: A geometric method in nonlinear programming. J. Optim. Theory Appl. 30, 181–210 (1980)

  83. Tikhonov, A.N.: The stability of inverse problems. Dokl. Akad. Nauk SSSR 39, 176–179 (1943)

  84. Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. Wiley, New York (1977)

  85. Vanderbei, R., Lin, K., Liu, H., Wang, L.: Revisiting compressed sensing: exploiting the efficiency of simplex and sparsification methods. Math. Prog. Comput. 8, 253–269, https://doi.org/10.1007/s12532-016-0105-y (2016)

  86. Wen, Z.-W., Yin, W.-T.: A feasible method for optimization with orthogonality constraints. Math. Program. 142, 397–434 (2013)

  87. Wilson, R.B.: A Simplicial Method for Convex Programming, Ph.D. thesis, Harvard University (1963)

  88. Witten, D.M., Tibshirani, R., Hastie, T.: A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics 10, 515–534 (2009)

  89. Yamashita, H.: A differential equation approach to nonlinear programming. Math. Program. 18, 155–168 (1980)

  90. Yuan, Y.: Recent advances in trust region algorithms. Math. Program. 151, 249–281 (2015)


Acknowledgements

The authors are grateful to the two anonymous referees for their comments and suggestions, which greatly improved the presentation of this paper.

Funding

This work was supported in part by Grants 61876199 and 62376036 from National Natural Science Foundation of China, Grant YBWL2011085 from Huawei Technologies Co., Ltd., and Grant YJCB2011003HI from the Innovation Research Program of Huawei Technologies Co., Ltd.

Author information

Corresponding author

Correspondence to Xin-long Luo.

Ethics declarations

Conflict of interest

Not applicable.

Ethical Approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Tables of Numerical Results

See Tables 3, 4, 5, 6, 7 and 8.

Table 3 Numerical results of Rcm, fmincon and SNOPT for CUTEst problems [28] (Exam. 1–20)
Table 4 Numerical results of Rcm, fmincon and SNOPT for CUTEst problems [28] (Exam. 21–40)
Table 5 Numerical results of Rcm, fmincon and SNOPT for CUTEst problems [28] (Exam. 41–65)
Table 6 Numerical results of Rcm, fmincon and SNOPT for large-scale problems with \(n = 2000, \; m = 10\)
Table 7 Numerical results of Rcm, fmincon and SNOPT for large-scale problems with \(n = 2000, \; m = 1000\)
Table 8 Numerical results of Rcm, fmincon and SNOPT for large-scale problems with \(n = 2000, \; m = 1999\)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Luo, Xl., Xiao, H. & Zhang, S. The Regularization Continuation Method for Optimization Problems with Nonlinear Equality Constraints. J Sci Comput 99, 17 (2024). https://doi.org/10.1007/s10915-024-02476-7

