Effective Input Variable Selection for Function Approximation

  • L. J. Herrera
  • H. Pomares
  • I. Rojas
  • M. Verleysen
  • A. Guillén
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)


Input variable selection is a key preprocessing step in any input/output modelling problem. Generalization performance normally improves when the unneeded parameters introduced by irrelevant or redundant variables are eliminated. Information theory, through the concept of mutual information, provides a robust theoretical framework for input variable selection. Nevertheless, estimating the mutual information between continuous input variables and a continuous output variable is usually more difficult than in classification problems. This paper presents a variable selection approach for continuous variables, adapted from a previous approach for classification problems, that makes use of a mutual information estimator based on the k-nearest neighbors.
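As a pointer to the kind of estimator the abstract refers to, the sketch below implements the Kraskov-Stögbauer-Grassberger (KSG) k-nearest-neighbor mutual information estimator for continuous samples. It is a minimal illustration using NumPy/SciPy, not the paper's full selection procedure; the function name ksg_mutual_information and the default k = 6 are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=6):
    """KSG estimator (variant 1) of the mutual information I(X; Y)
    between continuous samples x and y, each of shape (N,) or (N, d)."""
    x = x.reshape(len(x), -1)
    y = y.reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])

    # eps[i]: max-norm distance from point i to its k-th neighbour in the
    # joint space (k + 1 because the query returns the point itself first).
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]

    # nx[i], ny[i]: neighbours strictly closer than eps[i] in each marginal
    # space; the small shrink factor enforces the strict inequality, and the
    # "- 1" removes the query point itself from the count.
    tree_x, tree_y = cKDTree(x), cKDTree(y)
    nx = np.array([len(tree_x.query_ball_point(x[i], eps[i] * (1 - 1e-10), p=np.inf)) - 1
                   for i in range(n)])
    ny = np.array([len(tree_y.query_ball_point(y[i], eps[i] * (1 - 1e-10), p=np.inf)) - 1
                   for i in range(n)])

    # I(X; Y) ~= psi(k) + psi(N) - < psi(n_x + 1) + psi(n_y + 1) >
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```

In a basic greedy scheme, each candidate input column would be scored against the output with such an estimator and the highest-scoring variables retained; the paper's modified criterion for continuous variables makes use of this type of k-nearest-neighbor estimate.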


Keywords: Feature Selection · Mutual Information · Variable Selection · Probability Density Function · Variable Selection Method





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • L. J. Herrera ¹
  • H. Pomares ¹
  • I. Rojas ¹
  • M. Verleysen ²
  • A. Guillén ¹
  1. Computer Architecture and Computer Technology Department, University of Granada, Granada, Spain
  2. Machine Learning Group, Université catholique de Louvain, Louvain-la-Neuve, Belgium
