
Decision Tree Using Class-Dependent Feature Subsets

  • Kazuaki Aoki
  • Mineichi Kudo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2396)

Abstract

In pattern recognition, feature selection is an important technique for reducing the measurement cost of features, for improving classifier performance, or both. Removing features that carry no discriminative information also improves the precision of the estimated parameters of parametric classifiers. Most feature selection algorithms choose a single feature subset shared by all classes. However, the best feature subset for separating one group of classes from another may differ from group to group. In this study, we investigate the effectiveness of choosing feature subsets per group of classes (class-dependent features), and propose a classifier system built as a decision tree in which each node holds a class-dependent feature subset.
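To make the idea concrete, the Python sketch below builds a binary tree over the set of classes, where every internal node selects its own feature subset for separating one group of classes from the remaining classes and routes samples with a nearest-mean rule. This is an illustrative reconstruction under stated assumptions, not the authors' method: the half-and-half class split, the exhaustive subset search in `best_subset`, and the nearest-mean routing are simplifying stand-ins for the grouping and selection criteria studied in the paper.

```python
import numpy as np
from itertools import combinations

def separability(X, y, feats, group):
    # Toy node criterion: distance between the mean of the samples whose
    # class is in `group` and the mean of the remaining samples, measured
    # only on the features in `feats`. A stand-in for recognition-rate
    # based criteria.
    mask = np.isin(y, group)
    return np.linalg.norm(X[mask][:, feats].mean(axis=0)
                          - X[~mask][:, feats].mean(axis=0))

def best_subset(X, y, group, k):
    # Exhaustively pick the k features that best separate `group` from
    # the rest; larger problems would use a search method instead.
    return max(combinations(range(X.shape[1]), k),
               key=lambda f: separability(X, y, list(f), group))

def build(X, y, classes, k=2):
    # Recursively split the class set in two. Each internal node stores
    # its own class-dependent feature subset plus the two group means
    # used to route samples; a leaf is a single class label.
    if len(classes) == 1:
        return classes[0]
    group, rest = classes[:len(classes) // 2], classes[len(classes) // 2:]
    feats = list(best_subset(X, y, group, k))
    mask = np.isin(y, group)
    return {
        "feats": feats,
        "mu_left": X[mask][:, feats].mean(axis=0),
        "mu_right": X[~mask][:, feats].mean(axis=0),
        "left": build(X[mask], y[mask], group, k),
        "right": build(X[~mask], y[~mask], rest, k),
    }

def classify(node, x):
    # Descend the tree; at each node only that node's own feature subset
    # is measured.
    while isinstance(node, dict):
        z = x[node["feats"]]
        near_left = (np.linalg.norm(z - node["mu_left"])
                     <= np.linalg.norm(z - node["mu_right"]))
        node = node["left"] if near_left else node["right"]
    return node
```

Calling `build(X, y, sorted(set(y)))` on training data yields the tree, and `classify(tree, x)` then measures only the feature subsets along a single root-to-leaf path, which is where the per-group measurement saving appears.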

Keywords

Decision tree · Feature selection · Training sample · Recognition rate · Feature subset


Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Kazuaki Aoki
  • Mineichi Kudo
  Division of Systems and Information Engineering, Graduate School of Engineering, Hokkaido University, Sapporo, Japan
