Continuation methods with the trusty time-stepping scheme for linearly constrained optimization with noisy data

  • Research Article
  • Published:
Optimization and Engineering

Abstract

Nonlinear optimization problems with linear constraints arise in many engineering applications, such as the visual-inertial navigation and localization of an unmanned aerial vehicle in horizontal flight. To solve this practical problem efficiently, this paper constructs a continuation method with a trusty time-stepping scheme for the linearly equality-constrained optimization problem that must be solved at every sampling time. At every iteration, the new method solves only a system of linear equations, whereas a traditional optimization method such as sequential quadratic programming (SQP) must solve a quadratic programming subproblem. Consequently, the new method requires considerably less computational time than SQP. Numerical results show that the new method works well for this problem and that its computational time is about one fifth of that of SQP (the built-in subroutine fmincon.m of the MATLAB R2018a environment) or of the traditional dynamical method (the built-in subroutine ode15s.m of the MATLAB R2018a environment). Furthermore, we also give a global convergence analysis of the new method.
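
To make the per-iteration cost concrete, the MATLAB sketch below shows one step of a generic continuation-type iteration for \(\min f(x)\) subject to \(Ax = b\), in which the only linear-algebra work is a single regularized KKT-type solve. It is a minimal illustration of that idea with a fixed time step dt; it is not the trusty time-stepping update derived in the paper.

```matlab
% One illustrative step of a continuation-type iteration for
%   min f(x)  subject to  A*x = b,
% in which each iteration solves a single (regularized KKT-type) linear
% system.  This is only a generic sketch of the idea summarized in the
% abstract; the paper's trusty time-stepping update may differ.
function [x, dt] = continuation_step(gradf, A, b, x, dt)
    n  = numel(x);
    m  = size(A, 1);
    g  = gradf(x);                       % gradient of f at x (column vector)
    r  = b - A*x;                        % residual of the linear constraints
    K  = [eye(n)/dt, A'; A, zeros(m)];   % regularized KKT matrix
    dz = K \ [-g; r];                    % the single linear solve per iteration
    x  = x + dz(1:n);                    % move along the computed direction
    % A trusty (trust-region-like) rule would enlarge or shrink dt here,
    % based on the achieved reduction of f; dt is kept fixed in this sketch.
end
```

A small time step gives a damped, gradient-flow-like step, while a large one gives a more aggressive step toward a constrained stationary point; adapting the time step from iteration to iteration according to the achieved reduction of the objective is the role of the trust-region-style time-stepping control.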

References

  • Allgower EL, Georg K (2003) Introduction to numerical continuation methods. SIAM, Philadelphia, PA

  • Ascher UM, Petzold LR (1998) Computer methods for ordinary differential equations and differential-algebraic equations. SIAM, Philadelphia, PA

  • Bertsekas DP (2018) Nonlinear programming, 3rd edn. Tsinghua University Press, Beijing

  • Brown AA, Bartholomew-Biggs MC (1989) ODE versus SQP methods for constrained optimization. J Optim Theory Appl 62(3):371–386

  • Brenan KE, Campbell SL, Petzold LR (1996) Numerical solution of initial-value problems in differential-algebraic equations. SIAM, Philadelphia, PA

  • Carlberg K (2009) Lecture notes of constrained optimization. https://www.sandia.gov/~ktcarlb/opt_class/OPT_Lecture3.pdf

  • Caballero F, Merino L, Ferruz J, Ollero A (2009) Vision-based odometry and SLAM for medium and high altitude flying UAVs. J Intell Robot Syst 54(1–3):137–161

  • Coffey TS, Kelley CT, Keyes DE (2003) Pseudotransient continuation and differential-algebraic equations. SIAM J Sci Comput 25:553–569

  • Conn AR, Gould N, Toint PhL (2000) Trust-region methods. SIAM, Philadelphia, PA

  • Ellingson G, Brink K, McLain T (2018) Relative visual-inertial odometry for fixed-wing aircraft in GPS-denied environments. In: IEEE/ION Position, Location and Navigation Symposium (PLANS), pp 786–792

  • Fiacco AV, McCormick GP (1990) Nonlinear programming: sequential unconstrained minimization techniques. SIAM, Philadelphia, PA

  • Fletcher R, Powell MJD (1963) A rapidly convergent descent method for minimization. Comput J 6:163–168

  • Goh BS (2011) Approximate greatest descent methods for optimization with equality constraints. J Optim Theory Appl 148(3):505–527

  • Goldfarb D (1970) A family of variable metric updates derived by variational means. Math Comput 24:23–26

  • Golub GH, Van Loan CF (2013) Matrix computations, 4th edn. The Johns Hopkins University Press, Baltimore

  • Hairer E, Wanner G (1996) Solving ordinary differential equations II: stiff and differential-algebraic problems, 2nd edn. Springer-Verlag, Berlin

  • Heinkenschloss M (1996) Projected sequential quadratic programming methods. SIAM J Optim 6:373–417

  • Higham DJ (1999) Trust region algorithms and timestep selection. SIAM J Numer Anal 37:194–210

  • Hartley R, Zisserman A (2003) Multiple view geometry in computer vision, 2nd edn. Cambridge University Press, New York

  • Kelley CT, Liao L-Z, Qi L, Chu MT, Reese JP, Winton C (2008) Projected pseudotransient continuation. SIAM J Numer Anal 46:3071–3083

  • Liu DG, Fei JG (2000) Digital simulation algorithms for dynamic systems (in Chinese). Science Press, Beijing

  • Liu S-T, Luo X-L (2010) A method based on Rayleigh quotient gradient flow for extreme and interior eigenvalue problems. Linear Algebra Appl 432(7):1851–1863

  • Luo X-L (2012) A dynamical method of DAEs for the smallest eigenvalue problem. J Comput Sci 3(3):113–119

  • Luo X-L, Kelley CT, Liao L-Z, Tam H-W (2009) Combining trust-region techniques and Rosenbrock methods to compute stationary points. J Optim Theory Appl 140(2):265–286

  • Luo X-L, Lin J-R, Wu W-L (2013) A prediction-correction dynamic method for large-scale generalized eigenvalue problems. Abstr Appl Anal, Article ID 845459:1–8. http://dx.doi.org/10.1155/2013/845459

  • Luo X-L, Lv J-H, Sun G (2020) A visual-inertial navigation method for high-speed unmanned aerial vehicles. http://arxiv.org/abs/2002.04791

  • Mak M-W (2019) Lecture notes of constrained optimization and support vector machines. http://www.eie.polyu.edu.hk/~mwmak/EIE6207/ContOpt-SVM-beamer.pdf

  • MATLAB 9.4.0 (R2018a) (2018) The MathWorks Inc. http://www.mathworks.com

  • Nocedal J, Wright SJ (1999) Numerical optimization. Springer-Verlag, Berlin

  • Kim NH (2010) Lecture notes of constrained optimization. https://mae.ufl.edu/nkim/eas6939/ConstrainedOpt.pdf

  • Osborne MJ (2016) Mathematical methods for economic theory. https://mjo.osborne.economics.utoronto.ca/index.php/tutorial/index/1/mem

  • Powell MJD (1975) Convergence properties of a class of minimization algorithms. In: Mangasarian OL, Meyer RR, Robinson SM (eds) Nonlinear programming 2. Academic Press, New York, pp 1–27

  • Schropp J (2000) A dynamical systems approach to constrained minimization. Numer Funct Anal Optim 21(3–4):537–551

  • Schropp J (2003) One- and multistep discretizations of index 2 differential algebraic systems and their use in optimization. J Comput Appl Math 150:375–396

  • Shampine LF (2002) Solving \(0 = F(t, y(t), y^{\prime }(t))\) in Matlab. J Numer Math 10(4):291–310

  • Tanabe K (1980) A geometric method in nonlinear programming. J Optim Theory Appl 30(2):181–210

  • Yamashita H (1980) A differential equation approach to nonlinear programming. Math Program 18:155–168. https://doi.org/10.1007/BF01588311

  • Yuan Y (2015) Recent advances in trust region algorithms. Math Program 151:249–281

  • Zhang J, Singh S (2015) Visual-inertial combined odometry system for aerial vehicles. J Field Robot 32(8):1043–1055


Acknowledgements

This work was supported in part by Grant 61876199 from the National Natural Science Foundation of China, Grant YBWL2011085 from Huawei Technologies Co., Ltd., and Grant YJCB2011003HI from the Innovation Research Program of Huawei Technologies Co., Ltd. The authors are grateful to Prof. Hongchao Zhang, Prof. Li-Zhi Liao and two anonymous referees for their comments and suggestions, which greatly improved the presentation of this paper.

Author information

Corresponding author

Correspondence to Xin-long Luo.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Test problems

Example 1

$$\begin{aligned} \quad m&= n/2 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n/2} \;\left( x_{2k-1}^{2} + 10x_{2k}^{2}\right) , \; \text {subject to} \; x_{2i-1} + x_{2i} = 4, \; i = 1, \, 2, \ldots , \, m. \end{aligned}$$

This problem is extended from the problem of Kim (2010). We assume that the feasible initial point is \((2, \, 2, \, \ldots , \, 2, \, 2)\).
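
As a concrete illustration, the following MATLAB snippet sets up Example 1 and solves it with fmincon, the SQP baseline mentioned in the abstract. The dimension \(n = 10\) and the solver options are illustrative choices only (the Optimization Toolbox is required); they are not claimed to reproduce the settings of the reported experiments.

```matlab
% Example 1: min sum(x_{2k-1}^2 + 10 x_{2k}^2) s.t. x_{2i-1} + x_{2i} = 4.
n    = 10;                                    % any even dimension
f    = @(x) sum(x(1:2:end).^2 + 10*x(2:2:end).^2);
Aeq  = kron(eye(n/2), [1 1]);                 % row i encodes x_{2i-1} + x_{2i}
beq  = 4*ones(n/2, 1);                        % right-hand side of each constraint
x0   = 2*ones(n, 1);                          % feasible initial point (2, ..., 2)
opts = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
[xs, fs] = fmincon(f, x0, [], [], Aeq, beq, [], [], [], opts);
fprintf('f(x*) = %.6f, constraint residual = %.2e\n', fs, norm(Aeq*xs - beq));
```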

Example 2

$$\begin{aligned} \quad m&= n/3 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n/2} \; \left( \left( x_{2k-1} -2\right) ^{2} + 2\left( x_{2k} - 1 \right) ^{4}\right) - 5, \;\\&\text {subject to} \; x_{3i-2} + 4x_{3i-1}+2x_{3i} = 3, \; i = 1, \, 2, \ldots , \, n/3. \end{aligned}$$

We assume that the infeasible initial point is \((-0.5, \, 1.5, \, 1, \, 0, \, \ldots , \, 0, \, 0)\).

Example 3

$$\begin{aligned} \quad m&= 2n/3 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n}\; x_{k}^{2}, \\ \text {subject to} \;&x_{3i-2} + 2x_{3i-1} + x_{3i} = 1, \; 2 x_{3i-2} - x_{3i-1} - 3 x_{3i} = 4, \; i = 1, \, 2, \ldots , \, n/3. \end{aligned}$$

This problem is extended from the problem of Osborne (2016). We assume that the infeasible initial point is \((1, \, 0.5, \, -1, \, \ldots , \, 1, \, 0.5, \, -1)\).
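
For the examples whose constraints come in two equations per triple of variables (Examples 3 and 6), the block-structured constraint matrix is conveniently assembled with a Kronecker product. The snippet below is one way to implement Example 3; the dimension \(n = 9\) is an arbitrary illustrative choice and the construction is not prescribed by the paper.

```matlab
% Constraint assembly for Example 3: each triple (x_{3i-2}, x_{3i-1}, x_{3i})
% satisfies two linear equations, so m = 2n/3.
n  = 9;                                  % any multiple of 3
B  = [1  2  1;                           %  x_{3i-2} + 2 x_{3i-1} +   x_{3i} = 1
      2 -1 -3];                          % 2 x_{3i-2} -   x_{3i-1} - 3 x_{3i} = 4
A  = kron(eye(n/3), B);                  % (2n/3)-by-n block-diagonal matrix
b  = repmat([1; 4], n/3, 1);
f  = @(x) sum(x.^2);                     % objective of Example 3
x0 = repmat([1; 0.5; -1], n/3, 1);       % infeasible initial point
fprintf('initial constraint residual = %g\n', norm(A*x0 - b));
```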

Example 4

$$\begin{aligned} \quad m&= n/2 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n/2}\;\left( x_{2k-1}^{2} + x_{2k}^{6}\right) - 1, \; \text {subject to} \; x_{2i-1} + x_{2i} = 1,\\&i = 1, \, 2, \, \ldots , \, n/2. \end{aligned}$$

This problem is modified from the problem of Mak (2019). We assume that the infeasible initial point is \((1, \, 1, \, \ldots , \, 1)\).

Example 5

$$\begin{aligned} \quad m&= n/2 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n/2}\;\left( \left( x_{2k-1} -2\right) ^{4} + 2\left( x_{2k} -1\right) ^{6}\right) - 5,\\&\text {subject to} \; x_{2i-1} + 4x_{2i} = 3, \; i = 1, \, 2, \, \ldots , \, m. \end{aligned}$$

We assume that the feasible initial point is \((-1, \, 1,\, -1, \, 1, \, \ldots , \, -1, \, 1)\).

Example 6

$$\begin{aligned} \quad m&= 2n/3 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n/3}\;\left( x_{3k-2}^{2} + x_{3k-1}^{4} + x_{3k}^{6}\right) , \\ \text {subject to} \;&x_{3i-2} + 2x_{3i-1} + x_{3i} = 1, \; 2x_{3i-2} - x_{3i-1} - 3x_{3i} = 4, \; i = 1, \, 2, \, \ldots , \, n/3. \end{aligned}$$

This problem is extended from the problem of Osborne (2016). We assume that the infeasible initial point is \((2, \, 0, \, \ldots , \, 0)\).

Example 7

$$\begin{aligned} \quad m&= n/2 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n/2}\;\left( x_{2k-1}^{4} + 3x_{2k}^{2}\right) , \; \text {subject to} \; x_{2i-1} + x_{2i} = 4, \; i = 1, \, 2, \, \ldots , \, n/2. \end{aligned}$$

This problem is extended from the problem of Carlberg (2009). We assume that the infeasible initial point is \((2, \, 2, \, 0, \, \ldots , \, 0, \, 0)\).

Example 8

$$\begin{aligned} \quad m&= n/3 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n/3}\;\left( x_{3k-2}^{2} + x_{3k-2}^{2} \, x_{3k}^{2} + 2x_{3k-2} \, x_{3k-1} + x_{3k-1}^{4} + 8x_{3k-1}\right) ,\\ \text {subject to} \;&2x_{3i-2} + 5x_{3i-1} + x_{3i} = 3, \; i = 1, \, 2, \, \ldots , \, m. \end{aligned}$$

We assume that the infeasible initial point is \((1.5, \, 0, \, 0, \, \ldots , \, 0)\).

Example 9

$$\begin{aligned} \quad m&= n/2 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k =1}^{n/2} \; \left( x_{2k-1}^{4} + 10x_{2k}^{6}\right) ,\; \text {subject to} \; x_{2i-1} + x_{2i} =4, \; i = 1, \, 2, \, \ldots , \, m. \end{aligned}$$

This problem is extended from the problem of Kim (2010). We assume that the feasible initial point is \((2, \, 2, \, \ldots , \, 2, \, 2)\).

Example 10

$$\begin{aligned} \quad m&= n/3 \\ \min _{x \in \mathfrak {R}^{n}} \; f(x)&= \sum _{k=1}^{n/3}\;\left( x_{3k-2}^{8} + x_{3k-1}^{6} + x_{3k}^{2}\, \right) ,\; \text {subject to} \; x_{3i-2} + 2x_{3i-1} + 2x_{3i} =1,\\ i&= 1, \, 2, \, \ldots , \,m. \end{aligned}$$

This problem is modified from the problem of Yamashita (1980). The feasible initial point is \((1, \, 0, \, 0, \, \ldots , \, 1, \, 0, \, 0)\).

About this article

Cite this article

Luo, X.-L., Lv, J.-H. & Sun, G. Continuation methods with the trusty time-stepping scheme for linearly constrained optimization with noisy data. Optim Eng 23, 329–360 (2022). https://doi.org/10.1007/s11081-020-09590-z

  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11081-020-09590-z
