Iterative Optimization

  • Leonardo Rey Vega
  • Hernan Rey
Part of the SpringerBriefs in Electrical and Computer Engineering book series (BRIEFSELECTRIC)


In this chapter we introduce iterative search methods for minimizing cost functions, and in particular, the \(J_{\mathrm{MSE}}\) function. We focus on the methods of Steepest Descent and Newton-Raphson, which belong to the family of deterministic gradient algorithms. Although these methods, like the Wiener filter, require knowledge of the second-order statistics, they reach the Wiener solution iteratively rather than in closed form. We also study the convergence of both algorithms and include simulation results to provide further insight into their performance. Understanding their behavior and convergence properties is important, as they are the basis for the stochastic gradient adaptive filters developed in the next chapter.
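As a rough illustration of the two deterministic gradient methods named above, the sketch below minimizes a quadratic MSE-type cost with assumed (hypothetical) second-order statistics: an input autocorrelation matrix \(R\) and a cross-correlation vector \(p\), both chosen here purely for demonstration. Steepest Descent updates the weights along the negative gradient with a small step size, while the Newton-Raphson step premultiplies the gradient by \(R^{-1}\) and, for an exactly known quadratic cost, lands on the Wiener solution in a single iteration.

```python
import numpy as np

# Hypothetical second-order statistics (not from the chapter):
# R is the input autocorrelation matrix, p the cross-correlation
# vector between the input and the desired signal.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])

# Wiener solution R w = p, computed directly for comparison.
w_wiener = np.linalg.solve(R, p)

# Steepest Descent: w_{k+1} = w_k + mu (p - R w_k).
# The step size must satisfy 0 < mu < 2 / lambda_max(R) for convergence.
mu = 0.1
w = np.zeros(2)
for _ in range(500):
    w = w + mu * (p - R @ w)

# Newton-Raphson on a quadratic cost: one full step
# w_1 = w_0 + R^{-1} (p - R w_0) already equals the Wiener solution.
w0 = np.zeros(2)
w_newton = w0 + np.linalg.solve(R, p - R @ w0)

print(np.allclose(w, w_wiener))        # Steepest Descent converged
print(np.allclose(w_newton, w_wiener)) # Newton step is exact here
```

The contrast in convergence speed comes at a price: the Newton step needs the inverse (or a solve) of \(R\), whereas Steepest Descent uses only matrix-vector products, which is why gradient-type recursions are the natural starting point for the stochastic algorithms of the next chapter.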



Copyright information

© The Author(s) 2013

Authors and Affiliations

  1. School of Engineering, University of Buenos Aires, Buenos Aires, Argentina
  2. Department of Engineering, University of Leicester, Leicester, UK
