
Optimization via Chebyshev polynomials

Original Research
Journal of Applied Mathematics and Computing

Abstract

This paper presents, for the first time, a robust exact line-search method based on a full pseudospectral (PS) numerical scheme employing orthogonal polynomials. The proposed method adopts an adaptive search procedure and combines the superior accuracy of Chebyshev PS approximations with the high-order derivative approximations obtained through Chebyshev PS differentiation matrices. In addition, the method achieves a quadratic convergence rate by enforcing an adaptive Newton search iterative scheme. A rigorous error analysis of the proposed method is presented, along with a detailed set of pseudocodes for the established computational algorithms. Several numerical experiments on one- and multi-dimensional optimization test problems illustrate the advantages of the proposed strategy.


References

1. Baltensperger, R.: Improving the accuracy of the matrix differentiation method for arbitrary collocation points. Appl. Numer. Math. 33(1), 143–149 (2000)

2. Baltensperger, R., Trummer, M.R.: Spectral differencing with a twist. SIAM J. Sci. Comput. 24(5), 1465–1487 (2003)

3. Canuto, C., Hussaini, M.Y., Quarteroni, A., Zang, T.A.: Spectral Methods in Fluid Dynamics. Springer Series in Computational Physics. Springer, Berlin (1988)

4. Chong, E.K., Zak, S.H.: An Introduction to Optimization, vol. 76. Wiley, New York (2013)

5. Clenshaw, C.W., Curtis, A.R.: A method for numerical integration on an automatic computer. Numer. Math. 2, 197–205 (1960)

6. Costa, B., Don, W.S.: On the computation of high order pseudospectral derivatives. Appl. Numer. Math. 33(1), 151–159 (2000)

7. Day, D., Romero, L.: Roots of polynomials expressed in terms of orthogonal polynomials. SIAM J. Numer. Anal. 43(5), 1969–1987 (2005)

8. Elbarbary, E.M., El-Sayed, S.M.: Higher order pseudospectral differentiation matrices. Appl. Numer. Math. 55(4), 425–438 (2005)

9. Elgindy, K.T.: High-order numerical solution of second-order one-dimensional hyperbolic telegraph equation using a shifted Gegenbauer pseudospectral method. Numer. Methods Partial Differ. Equ. 32(1), 307–349 (2016)

10. Elgindy, K.T., Hedar, A.: A new robust line search technique based on Chebyshev polynomials. Appl. Math. Comput. 206(2), 853–866 (2008)

11. Elgindy, K.T., Smith-Miles, K.A.: Fast, accurate, and small-scale direct trajectory optimization using a Gegenbauer transcription method. J. Comput. Appl. Math. 251, 93–116 (2013)

12. Elgindy, K.T., Smith-Miles, K.A.: Optimal Gegenbauer quadrature over arbitrary integration nodes. J. Comput. Appl. Math. 242, 82–106 (2013)

13. Elgindy, K.T., Smith-Miles, K.A.: Solving boundary value problems, integral, and integro-differential equations using Gegenbauer integration matrices. J. Comput. Appl. Math. 237(1), 307–325 (2013)

14. Elgindy, K.T., Smith-Miles, K.A., Miller, B.: Solving optimal control problems using a Gegenbauer transcription method. In: 2012 2nd Australian Control Conference (AUCC), pp. 417–424. IEEE (2012)

15. Gottlieb, D., Orszag, S.A.: Numerical Analysis of Spectral Methods: Theory and Applications. CBMS-NSF Regional Conference Series in Applied Mathematics, no. 26. SIAM, Philadelphia (1977)

16. Hesthaven, J.S.: From electrostatics to almost optimal nodal sets for polynomial interpolation in a simplex. SIAM J. Numer. Anal. 35(2), 655–676 (1998)

17. Kopriva, D.A.: Implementing Spectral Methods for Partial Differential Equations: Algorithms for Scientists and Engineers. Springer, Berlin (2009)

18. Mason, J.C., Handscomb, D.C.: Chebyshev Polynomials. Chapman & Hall/CRC, Boca Raton (2003)

19. Nickalls, R.: Viète, Descartes and the cubic equation. Math. Gaz. 90, 203–208 (2006)

20. Nocedal, J., Wright, S.: Numerical Optimization. Springer, Berlin (2006)

21. Pedregal, P.: Introduction to Optimization, vol. 46. Springer, Berlin (2006)

22. Peherstorfer, F.: Zeros of linear combinations of orthogonal polynomials. Math. Proc. Camb. Philos. Soc. 117, 533–544 (1995)

23. Sherman, J., Morrison, W.J.: Adjustment of an inverse matrix corresponding to changes in the elements of a given column or a given row of the original matrix. Ann. Math. Stat. 20, 621 (1949)

24. Snyder, M.A.: Chebyshev Methods in Numerical Approximation, vol. 2. Prentice-Hall, Englewood Cliffs (1966)

25. Weideman, J.A.C., Reddy, S.C.: A MATLAB differentiation matrix suite. ACM Trans. Math. Softw. 26(4), 465–519 (2000)


Author information


Corresponding author

Correspondence to Kareem T. Elgindy.

Appendices

Appendix 1: Preliminaries

In this appendix, we present some useful results from approximation theory. The Chebyshev polynomial of the first kind (or simply the Chebyshev polynomial) of degree \(n\), \(T_n(x)\), is given in trigonometric form by the explicit formula

$$\begin{aligned} {T_n}(x) = \cos \left( {n\,{{\cos }^{ - 1}}(x)} \right) \;\forall x \in [ - 1,1], \end{aligned}$$
(8.1)

or, equivalently, in the following explicit polynomial form [24]:

$$\begin{aligned} {T_n}(x) = \frac{1}{2}\sum \limits _{k = 0}^{\left\lfloor {n/2} \right\rfloor } {T_{n - 2k}^k\,{x^{n - 2k}}} , \end{aligned}$$
(8.2)

where

$$\begin{aligned} T_m^k = {2^m}{( - 1)^k}\left\{ {\frac{{m + 2\,k}}{{m + k}}} \right\} \left( \begin{array}{c} m + k\\ k \end{array} \right) ,\quad m,k \ge 0. \end{aligned}$$
(8.3)
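
As a quick numerical sanity check (an illustration added here, not part of the paper), the following Python/NumPy snippet confirms that the polynomial form (8.2)–(8.3) reproduces the trigonometric definition (8.1) for \(n \ge 1\); note that \(m + k = n - k > 0\) in this range, so the fraction in (8.3) is well defined:

```python
import numpy as np
from math import comb

def T_explicit(n, x):
    """Evaluate T_n(x) from the explicit polynomial form (8.2)-(8.3), n >= 1."""
    total = 0.0
    for k in range(n // 2 + 1):
        m = n - 2 * k
        # T_m^k = 2^m (-1)^k (m + 2k)/(m + k) C(m + k, k), Eq. (8.3)
        total += 2.0**m * (-1) ** k * (m + 2 * k) / (m + k) * comb(m + k, k) * x**m
    return 0.5 * total  # the 1/2 factor in Eq. (8.2)

x = np.linspace(-1.0, 1.0, 7)
for n in range(1, 9):
    assert np.allclose(T_explicit(n, x), np.cos(n * np.arccos(x)))  # Eq. (8.1)
```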

The Chebyshev polynomials can be generated by the three-term recurrence relation

$$\begin{aligned} T_{n + 1} (x) = 2xT_n (x) - T_{n - 1} (x), \quad n \ge 1, \end{aligned}$$
(8.4)

starting with \({T_0}(x) = 1\) and \({T_1}(x) = x\). The Chebyshev polynomials are orthogonal on the interval \([-1, 1]\) with respect to the weight function \(w(x) = \left( 1 - x^2\right) ^{-1/2}\), and their orthogonality relation is given by

$$\begin{aligned} \left\langle {T_n},{T_m} \right\rangle _w = \int _{ - 1}^1 {T_n}(x)\,{T_m}(x)\,{\left( 1 - {x^2}\right) ^{ - \frac{1}{2}}}\,dx = \frac{\pi }{2}{c_n}{\delta _{nm}}, \end{aligned}$$
(8.5)

where \(c_0 = 2\), \(c_n = 1\) for \(n \ge 1\), and \(\delta _{nm}\) is the Kronecker delta defined by

$$\begin{aligned} {\delta _{nm}} = \left\{ {\begin{array}{ll} 1,&{}\quad n = m,\\ 0,&{}\quad n \ne m. \end{array}} \right. \end{aligned}$$
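
The recurrence (8.4) is the standard way to evaluate Chebyshev polynomials in practice. The sketch below (our illustration, not the paper's code) generates \(T_0, \ldots, T_8\) from the recurrence and verifies the orthogonality relation (8.5) with the \(N\)-point Gauss–Chebyshev rule \(\int_{-1}^{1} g(x)(1 - x^2)^{-1/2}\,dx \approx (\pi/N)\sum_{k=1}^{N} g(x_k)\), whose nodes are the roots of \(T_N\) given in Eq. (8.6) below and which is exact for polynomials of degree at most \(2N - 1\):

```python
import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_0, ..., T_n at the points x via the recurrence (8.4)."""
    T = [np.ones_like(x), x]
    for _ in range(1, n):
        T.append(2 * x * T[-1] - T[-2])
    return T[: n + 1]

N = 20                                                       # quadrature order
x = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))  # roots of T_N, Eq. (8.6)
T = chebyshev_T(8, x)
for n in range(9):
    for m in range(9):
        inner = np.pi / N * np.sum(T[n] * T[m])              # Gauss-Chebyshev quadrature
        c_n = 2 if n == 0 else 1
        assert abs(inner - np.pi / 2 * c_n * (n == m)) < 1e-12  # Eq. (8.5)
```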

The roots of \(T_n(x)\), also known as the Chebyshev–Gauss (CG) points, are given by

$$\begin{aligned} {x_k} = \cos \left( {\frac{{2k - 1}}{{2n}}\pi } \right) ,\quad k = 1, \ldots ,n, \end{aligned}$$
(8.6)

and the extrema, also known as the Chebyshev–Gauss–Lobatto (CGL) points, are defined by

$$\begin{aligned} {x_k} = \cos \left( {\frac{{k \pi }}{n}} \right) ,\quad k = 0,1, \ldots ,n. \end{aligned}$$
(8.7)
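
Both node families are trivial to generate; the following lines (an added check, not from the paper) confirm that \(T_n\) vanishes at the points (8.6) and attains \(\pm 1\) at the CGL points (8.7):

```python
import numpy as np

n = 10
roots = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # Eq. (8.6)
extrema = np.cos(np.arange(n + 1) * np.pi / n)                   # Eq. (8.7)

Tn = lambda x: np.cos(n * np.arccos(x))                          # Eq. (8.1)
assert np.allclose(Tn(roots), 0.0, atol=1e-12)   # T_n vanishes at its roots
assert np.allclose(np.abs(Tn(extrema)), 1.0)     # |T_n| = 1 at the extrema
```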

The derivative of \(T_n(x)\) can be obtained in terms of Chebyshev polynomials as follows [18]:

$$\begin{aligned} \frac{d}{{dx}}{T_n}(x) = \frac{n}{2}\frac{{{T_{n - 1}}(x) - {T_{n + 1}}(x)}}{{1 - {x^2}}},\quad \left| x \right| \ne 1. \end{aligned}$$
(8.8)
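
Identity (8.8) is easy to verify numerically; the check below (ours, not the paper's) uses NumPy's numpy.polynomial.chebyshev module, staying away from the excluded endpoints \(x = \pm 1\):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 7
x = np.linspace(-0.9, 0.9, 11)                      # avoid |x| = 1
lhs = C.chebval(x, C.chebder([0] * n + [1]))        # d/dx T_n(x)
rhs = n / 2 * (C.chebval(x, [0] * (n - 1) + [1])    # T_{n-1}(x)
               - C.chebval(x, [0] * (n + 1) + [1])) / (1 - x**2)  # T_{n+1}(x)
assert np.allclose(lhs, rhs)                        # Eq. (8.8)
```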

Clenshaw and Curtis [5] showed that a continuous function f(x) with bounded variation on \([-1, 1]\) can be approximated by the truncated series

$$\begin{aligned} ({P_n}f)(x) = {\sum \limits _{k = 0}^n}{}^{\prime \prime }\, {a_k}\,{T_k}(x), \end{aligned}$$
(8.9)

where

$$\begin{aligned} {a_k} = \frac{2}{n}{\sum \limits _{j = 0}^n}{}^{\prime \prime }\, {f_j}\,{T_k}({x_j}),\quad n > 0, \end{aligned}$$
(8.10)

\(x_j,\; j = 0, \ldots, n\), are the CGL points defined by Eq. (8.7), \(f_j = f(x_j)\;\forall j\), and the summation symbol with double primes denotes a sum with both the first and last terms halved. For a smooth function \(f\), the error of the truncated Chebyshev series (8.9) decays faster than any finite power of \(1/n\); that is, the approximation converges exponentially [15].
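
In practice, the coefficients (8.10) can be computed in a handful of lines, since \(T_k(x_j) = \cos(kj\pi /n)\) at the CGL points (the computation amounts to a type-I discrete cosine transform). The sketch below is our illustration of Eqs. (8.9)–(8.10), with \(f(x) = e^x \sin 5x\) chosen arbitrarily as a smooth test function (our choice, not from the paper):

```python
import numpy as np

def cheb_coeffs(f, n):
    """Coefficients a_k of Eq. (8.10) computed at the CGL points (8.7)."""
    j = np.arange(n + 1)
    fj = f(np.cos(j * np.pi / n))
    w = np.ones(n + 1)
    w[0] = w[-1] = 0.5                        # double prime: halve first/last terms
    Tkj = np.cos(np.outer(j, j) * np.pi / n)  # Tkj[k, j] = T_k(x_j)
    return 2.0 / n * (Tkj * w) @ fj

def cheb_eval(a, x):
    """Evaluate the truncated series (8.9), halving the first and last terms."""
    a = a.copy()
    a[0] *= 0.5
    a[-1] *= 0.5
    return np.polynomial.chebyshev.chebval(x, a)

f = lambda x: np.exp(x) * np.sin(5 * x)       # arbitrary smooth test function
xx = np.linspace(-1, 1, 1001)
for n in (8, 16, 32):
    a = cheb_coeffs(f, n)
    print(n, np.max(np.abs(cheb_eval(a, xx) - f(xx))))
# the maximum error decays roughly exponentially in n (spectral accuracy)
```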

Appendix 2: Pseudocodes of developed computational algorithms

[The pseudocodes of the developed computational algorithms appear as six figure listings in the published article; they are not reproduced in this version.]
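
Since the algorithm listings themselves are unavailable here, we include instead a generic illustration only (a standard construction in the spirit of Weideman and Reddy [25], not the paper's algorithms) of the first-order Chebyshev PS differentiation matrix at the CGL points (8.7), with its diagonal computed by the negative-sum trick advocated by Baltensperger [1] for improved accuracy:

```python
import numpy as np

def cheb_diff_matrix(N):
    """First-order Chebyshev differentiation matrix at the CGL points (8.7)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.arange(N + 1) * np.pi / N)              # CGL points
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T                          # X[i, j] = x_i
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))    # diagonal from negative row sums [1]
    return D, x

# Differentiating exp(x) should reproduce exp(x) to near machine precision:
D, x = cheb_diff_matrix(24)
print(np.max(np.abs(D @ np.exp(x) - np.exp(x))))
```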


About this article


Cite this article

Elgindy, K.T. Optimization via Chebyshev polynomials. J. Appl. Math. Comput. 56, 317–349 (2018). https://doi.org/10.1007/s12190-016-1076-x
