Advanced Algorithms for 1-D Adaptive Filtering

  • W. Kenneth Jenkins
  • Andrew W. Hull
  • Jeffrey C. Strait
  • Bernard A. Schnaufer
  • Xiaohui Li
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 365)

Abstract

In adaptive filtering practice, the Least Mean Squares (LMS) algorithm is widely used because of its computational simplicity and ease of implementation. However, since its convergence rate depends on the eigenvalue ratio of the autocorrelation matrix of the input signal, an LMS adaptive filter converges rather slowly when trained with colored noise as the input. With the continuing increase of computational power available in modern integrated signal processors (simply called "DSP chips" throughout the following discussion), adaptive filter designers should be free in the future to use more computationally intensive adaptive filtering algorithms that can outperform the simple LMS algorithm in real-time applications. The objective of this chapter is to explore several of these more computationally intensive, but potentially better performing, adaptive filtering algorithms. In particular, we will consider four classes of algorithms that have received attention from the research community over the last few years: 1) data-reusing LMS algorithms, 2) orthogonalization by pseudo-random (PR) modulation, 3) Gauss-Newton optimization for FIR filters, and 4) block adaptive IIR filters using preconditioned conjugate gradient techniques.
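To fix ideas, the baseline LMS algorithm that the abstract refers to can be sketched in a few lines. The following is a minimal sketch (not taken from the chapter) of an LMS FIR filter in a system-identification setup; the 2-tap unknown system `h`, the step size `mu`, and the white-noise input are illustrative assumptions.

```python
import random

def lms(x, d, n_taps, mu):
    """Least Mean Squares adaptive FIR filter (system-identification form).

    x : input samples; d : desired samples; mu : step size.
    For stability, mu must be small relative to the input power and n_taps.
    Returns the final tap-weight vector.
    """
    w = [0.0] * n_taps
    buf = [0.0] * n_taps                # most recent inputs, buf[0] = newest
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))        # filter output
        e = dn - y                                        # a-priori error
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]  # LMS weight update
    return w

# Identify a hypothetical 2-tap system h = [0.5, -0.3] driven by white noise.
random.seed(0)
h = [0.5, -0.3]
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
d = [h[0] * x[n] + (h[1] * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w = lms(x, d, n_taps=2, mu=0.05)
```

With a white (uncorrelated) input, as here, the autocorrelation matrix has unit eigenvalue spread and the weights converge quickly toward `h`; replacing `x` with colored noise widens the eigenvalue spread and slows convergence, which is exactly the limitation that motivates the more elaborate algorithms surveyed in this chapter.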




Copyright information

© Springer Science+Business Media New York 1996

Authors and Affiliations

  • W. Kenneth Jenkins (University of Illinois, USA)
  • Andrew W. Hull (University of Illinois, USA)
  • Jeffrey C. Strait (University of Illinois, USA)
  • Bernard A. Schnaufer (University of Illinois, USA)
  • Xiaohui Li (University of Illinois, USA)
