Constructive Meta-level Feature Selection Method Based on Method Repositories

  • Hidenao Abe
  • Takahira Yamaguchi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3918)


Feature selection is one of the key issues in data pre-processing for classification tasks in a data mining process. Although much effort has been devoted to improving typical feature selection algorithms (FSAs), such as filter methods and wrapper methods, it is hard for any single FSA to perform well across various datasets. To address this problem, we propose another way to support the feature selection procedure: constructing a proper FSA for each given dataset. We discuss constructive meta-level feature selection, which decomposes representative FSAs into methods and then re-constructs a proper FSA from a method repository for each given dataset. After implementing the constructive meta-level feature selection system, we show how well it works on 32 common UCI datasets, comparing its accuracies with those of typical FSAs. As a result, our system achieves the highest accuracy and demonstrates that it can construct a proper FSA for each given dataset automatically.
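The core idea, decomposing representative FSAs into functional parts (for instance, a search strategy and an evaluation measure) held in a method repository, then re-composing candidate FSAs and keeping the one that scores best on the given dataset, can be sketched as follows. This is a minimal illustration under assumed names (REPOSITORY, construct_best_fsa, a toy correlation-based measure), not the authors' implementation; the actual system covers many more methods and evaluates constructed FSAs with learned classifiers.

```python
# Illustrative sketch of constructive meta-level feature selection:
# representative FSAs are decomposed into functional parts, stored in a
# method repository, and re-composed into a dataset-specific FSA by
# trying each composition and keeping the best-scoring one.
import itertools
import random

def correlation_score(xs, ys):
    """Pearson correlation magnitude between one feature column and the labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def subset_score(X, y, subset):
    """Evaluation measure (functional part): mean per-feature relevance."""
    if not subset:
        return 0.0
    cols = [[row[j] for row in X] for j in subset]
    return sum(correlation_score(col, y) for col in cols) / len(subset)

def forward_search(X, y, measure):
    """Search strategy (functional part): greedy forward selection."""
    remaining, chosen, best = set(range(len(X[0]))), [], 0.0
    while remaining:
        j, s = max(((j, measure(X, y, chosen + [j])) for j in remaining),
                   key=lambda t: t[1])
        if s <= best:
            break
        chosen.append(j)
        remaining.remove(j)
        best = s
    return chosen, best

def backward_search(X, y, measure):
    """Search strategy (functional part): greedy backward elimination."""
    chosen = list(range(len(X[0])))
    best, improved = measure(X, y, chosen), True
    while improved and len(chosen) > 1:
        improved = False
        for j in list(chosen):
            trial = [k for k in chosen if k != j]
            s = measure(X, y, trial)
            if s > best:
                chosen, best, improved = trial, s, True
                break
    return chosen, best

# The method repository (assumed contents): each candidate FSA is
# re-constructed by pairing one search strategy with one evaluation measure.
REPOSITORY = {
    "search": {"forward": forward_search, "backward": backward_search},
    "measure": {"mean_correlation": subset_score},
}

def construct_best_fsa(X, y):
    """Meta-level step: try every composition, keep the best for this dataset."""
    results = {}
    for s_name, m_name in itertools.product(REPOSITORY["search"],
                                            REPOSITORY["measure"]):
        search = REPOSITORY["search"][s_name]
        measure = REPOSITORY["measure"][m_name]
        results[(s_name, m_name)] = search(X, y, measure)
    best = max(results, key=lambda k: results[k][1])
    return best, results[best]

if __name__ == "__main__":
    random.seed(0)
    # Toy dataset: 3 informative features out of 6.
    y = [random.randint(0, 1) for _ in range(100)]
    X = [[yi + random.gauss(0, 0.5) if j < 3 else random.gauss(0, 1)
          for j in range(6)] for yi in y]
    combo, (subset, score) = construct_best_fsa(X, y)
    print("constructed FSA:", combo, "-> features", sorted(subset),
          "score %.3f" % score)
```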


Keywords: Feature Selection · Feature Subset · Feature Selection Method · Feature Selection Algorithm · Functional Part





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hidenao Abe (1)
  • Takahira Yamaguchi (2)
  1. Department of Medical Informatics, Shimane University, Shimane, Japan
  2. Faculty of Science and Technology, Keio University, Yokohama, Japan
