Abstract

Feature selection is an important technique in pattern recognition. By removing features that carry little or no discriminative information, it can improve the predictive performance of classifiers and reduce the cost of measuring features. Feature selection algorithms usually choose a single feature subset that is useful for all classes. In practice, however, the most informative feature subset varies from class to class, relative to the remaining classes. In this study, we propose a classifier structured as a decision tree in which each leaf corresponds to one class and each internal node assigns a sample to one of two class subsets, using a feature subset selected for that node. We also discuss the selection of a classifier at each node.
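To make the proposed structure concrete, the following is a minimal Python sketch of such a tree. It is an illustration under our own assumptions, not the authors' algorithm: the names (`Node`, `build_tree`, `classify`), the class-partitioning heuristic (`split_classes`), the feature-selection placeholder (`select_features`), and the nearest-class-mean rule at each internal node are all hypothetical stand-ins for the selection procedures the paper develops.

```python
# A minimal sketch of the proposed tree structure (our illustration, not the
# authors' exact algorithm): each internal node splits the remaining classes
# into two subsets using its own feature subset; each leaf holds one class.
import numpy as np

class Node:
    def __init__(self, classes, features=None):
        self.classes = classes    # class labels handled by this subtree
        self.features = features  # feature subset used at this internal node
        self.left = self.right = None
        self.left_mean = self.right_mean = None  # per-subset sample means

def split_classes(X, y, classes):
    # Hypothetical partitioning heuristic: halve the class list. The paper's
    # criterion for choosing the two class subsets would replace this.
    half = len(classes) // 2
    return classes[:half], classes[half:]

def select_features(X, y, left_cls, right_cls):
    # Placeholder for node-specific feature selection: keep all features.
    return np.arange(X.shape[1])

def build_tree(X, y, classes):
    if len(classes) == 1:
        return Node(classes)  # leaf: exactly one class
    left_cls, right_cls = split_classes(X, y, classes)
    feats = select_features(X, y, left_cls, right_cls)
    node = Node(classes, feats)
    mask_l, mask_r = np.isin(y, left_cls), np.isin(y, right_cls)
    node.left_mean = X[mask_l][:, feats].mean(axis=0)
    node.right_mean = X[mask_r][:, feats].mean(axis=0)
    node.left = build_tree(X[mask_l], y[mask_l], left_cls)
    node.right = build_tree(X[mask_r], y[mask_r], right_cls)
    return node

def classify(node, x):
    # Route the sample down the tree until a single-class leaf is reached.
    if len(node.classes) == 1:
        return node.classes[0]
    z = x[node.features]
    if np.linalg.norm(z - node.left_mean) <= np.linalg.norm(z - node.right_mean):
        return classify(node.left, x)
    return classify(node.right, x)

# Tiny usage example on synthetic data.
X = np.random.rand(60, 4)
y = np.repeat([0, 1, 2], 20)
tree = build_tree(X, y, [0, 1, 2])
print(classify(tree, X[0]))
```

Here the nearest-class-mean rule merely stands in for whichever classifier the per-node selection step would choose; swapping in a different node classifier only changes `classify` and the statistics stored in `Node`.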

Keywords

Feature Selection, Training Sample, Recognition Rate, Internal Node, Feature Subset

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Kazuaki Aoki ¹
  • Mineichi Kudo ¹

  1. Division of Computer Science, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
