Nonlinear Adaptive Filtering with MEE, MCC, and Applications

  • Deniz Erdogmus
  • Rodney Morejon
  • Weifeng Liu
Part of the Information Science and Statistics book series (ISS)


Our emphasis on the linear model in Chapter 4 was motivated only by simplicity and pedagogy. As the simple case studies demonstrated, under linearity and Gaussianity conditions the final solution of the MEE algorithms is essentially equivalent to the solution obtained with LMS. Because the LMS algorithm is computationally simpler and better understood, there is no real advantage to using MEE in such cases.
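This equivalence can be checked numerically. The sketch below is illustrative only: the step sizes, kernel size, and window length are assumptions for this toy problem, not settings from the chapter. It trains the same linear filter two ways on linear-Gaussian data: with LMS, and with a windowed MEE update that performs gradient ascent on the quadratic information potential V(e) = (1/L²) Σᵢⱼ G_σ(eᵢ − eⱼ); both recover essentially the same weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear plant with Gaussian noise: d = w_true . x + n
w_true = np.array([1.0, -0.5])
N = 400
X = rng.normal(size=(N, 2))
d = X @ w_true + 0.05 * rng.normal(size=N)

# --- LMS: stochastic gradient descent on the mean square error ---
w_lms = np.zeros(2)
mu = 0.05  # LMS step size (illustrative)
for x, dk in zip(X, d):
    e = dk - w_lms @ x
    w_lms += mu * e * x

# --- MEE: gradient ascent on the quadratic information potential,
#     estimated with a Gaussian Parzen kernel over a sliding window ---
w_mee = np.zeros(2)
eta, sigma, L = 0.2, 1.0, 20  # step size, kernel size, window (illustrative)
for k in range(L, N):
    Xw, dw = X[k - L:k], d[k - L:k]
    e = dw - Xw @ w_mee
    de = e[:, None] - e[None, :]            # pairwise error differences
    G = np.exp(-de**2 / (2 * sigma**2))     # Gaussian kernel on differences
    dX = Xw[:, None, :] - Xw[None, :, :]    # pairwise input differences
    grad = (G * de)[:, :, None] * dX        # dV/dw, term by term
    w_mee += eta * grad.mean(axis=(0, 1)) / sigma**2

print(w_lms, w_mee)  # both close to w_true
```

Note that the MEE gradient depends only on pairwise error differences, so it is blind to a constant error offset; for this bias-free linear model that does not matter, but in general a bias term must be set separately (e.g., from the error mean).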


Keywords: Mean Square Error · Line Search · Performance Surface · Kernel Size · Impulsive Noise



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Deniz Erdogmus
  • Rodney Morejon
  • Weifeng Liu

Dept. Electrical Engineering & Biomedical Engineering, University of Florida, Gainesville, USA
