
Boosting Feature Selection

  • D. B. Redpath
  • K. Lebart
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3686)

Abstract

It is possible to reduce the error rate of a single classifier by using a classifier ensemble. However, any gain in performance is undermined by the increased computational cost of performing classification several times. Here the Adaboost_FS algorithm is proposed, which builds on two popular areas of ensemble research: Adaboost and Ensemble Feature Selection (EFS). The aim of Adaboost_FS is to reduce the number of features used by each base classifier and hence the overall computation required by the ensemble. To do this, the algorithm combines a regularised version of Boosting, Adaboost_Reg [1], with a floating feature search for each base classifier.
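
As a rough illustration of the mechanism described above, the following minimal Python sketch runs an AdaBoost-style loop in which each base classifier is restricted to a feature subset chosen on the current sample weights. It is not the authors' algorithm: the floating search is approximated by a plain greedy forward selection, the Adaboost_Reg regularisation is omitted, and the base learner (a depth-2 decision tree), the function names and the parameters n_rounds and max_feats are assumptions made for the example.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def forward_select(X, y, w, max_feats):
        """Greedy forward selection of a feature subset, scored by weighted
        training error (a floating search would also test removals)."""
        selected, best_err = [], np.inf
        remaining = list(range(X.shape[1]))
        while remaining and len(selected) < max_feats:
            errs = []
            for f in remaining:
                clf = DecisionTreeClassifier(max_depth=2, random_state=0)
                clf.fit(X[:, selected + [f]], y, sample_weight=w)
                pred = clf.predict(X[:, selected + [f]])
                errs.append(np.sum(w[pred != y]))
            if min(errs) >= best_err:          # stop when no candidate improves
                break
            best_err = min(errs)
            f_best = remaining[int(np.argmin(errs))]
            selected.append(f_best)
            remaining.remove(f_best)
        return selected

    def adaboost_fs_sketch(X, y, n_rounds=10, max_feats=3):
        """y in {-1, +1}. Returns a list of (feature_subset, classifier, alpha)."""
        n = len(y)
        w = np.full(n, 1.0 / n)
        ensemble = []
        for _ in range(n_rounds):
            feats = forward_select(X, y, w, max_feats)
            clf = DecisionTreeClassifier(max_depth=2, random_state=0)
            clf.fit(X[:, feats], y, sample_weight=w)
            pred = clf.predict(X[:, feats])
            err = np.sum(w[pred != y])
            if err >= 0.5:                     # weak learner no better than chance
                break
            alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
            w *= np.exp(-alpha * y * pred)     # standard AdaBoost re-weighting
            w /= w.sum()
            ensemble.append((feats, clf, alpha))
        return ensemble

    def predict(ensemble, X):
        score = sum(alpha * clf.predict(X[:, feats])
                    for feats, clf, alpha in ensemble)
        return np.sign(score)

Because each base classifier only ever sees its selected subset, both training and prediction cost scale with the (small) subset sizes rather than the full feature dimension, which is the computational saving the abstract refers to.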

Adaboost_FS is compared on four benchmark data sets to Adaboost_All, which uses all features, and to Adaboost_RSM, which uses a random selection of features. Performance is assessed in terms of error rate, ensemble error and diversity, and the total number of features used for classification. Results show that Adaboost_FS achieves a lower error rate and higher diversity than Adaboost_All, and a lower error rate and comparable diversity to Adaboost_RSM. Moreover, Adaboost_FS yields a significant reduction over the other methods in the number of features required for classification, both in each base classifier and across the entire ensemble.
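
The abstract does not state which diversity measure is used (reference [17] points towards a kappa-style statistic), so the snippet below is only a sketch of how such evaluation quantities could be computed from a matrix of base-classifier predictions: majority-vote ensemble error, mean pairwise disagreement as a diversity proxy, and feature usage per classifier and for the whole ensemble. The function name and signature are assumptions for illustration.

    import numpy as np

    def evaluate_ensemble(preds, y, feature_subsets):
        """preds: (L, n) array of {-1, +1} predictions from L base classifiers;
        feature_subsets: list of the feature indices used by each classifier."""
        L, n = preds.shape
        vote = np.sign(preds.sum(axis=0))              # majority vote
        ensemble_error = np.mean(vote != y)
        base_errors = np.mean(preds != y, axis=1)      # per-classifier error
        # Mean pairwise disagreement: one common diversity proxy
        # (the paper's actual measure is not stated in the abstract).
        pairs = [(i, j) for i in range(L) for j in range(i + 1, L)]
        diversity = np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs])
        # Feature usage in each base classifier and in the entire ensemble.
        mean_feats = np.mean([len(s) for s in feature_subsets])
        total_feats = len(set().union(*map(set, feature_subsets)))
        return ensemble_error, base_errors, diversity, mean_feats, total_feats

Note that with an even number of classifiers a tied vote gives a sign of zero and is counted as an error here; a fuller evaluation would break ties explicitly.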

Keywords

Feature Selection · Feature Subset · Base Classifier · Benchmark Dataset · Ensemble Size

References

  1. Rätsch, G., Onoda, T., Müller, K.R.: Soft margins for AdaBoost. Machine Learning 42, 287–320 (2001)
  2. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: Proc. 13th International Conference on Machine Learning, pp. 148–156 (1996)
  3. Schapire, R., Freund, Y., Bartlett, P., Lee, W.: Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 1651–1686 (1998)
  4. Brown, G., Wyatt, J., Harris, R., Yao, X.: Diversity creation methods: A survey and categorisation. Information Fusion 6, 5–20 (2005)
  5. Quinlan, J.R.: Bagging, boosting and C4.5. In: Proceedings of the Thirteenth National Conference on Artificial Intelligence, pp. 725–730 (1996)
  6. Schapire, R., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. Machine Learning 37, 297–336 (1999)
  7. Tieu, K., Viola, P.: Boosting image retrieval. In: IEEE Conf. on Computer Vision and Pattern Recognition, pp. 228–235 (2000)
  8. Ho, T.: The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 832–844 (1998)
  9. Bryll, R., Gutierrez-Osuna, R., Quek, F.: Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets. Pattern Recognition 36, 1291–1302 (2003)
  10. Cunningham, P., Carney, J.: Diversity versus quality in classification ensembles based on feature selection. In: Lopez de Mantaras, R., Plaza, E. (eds.) ECML 2000. LNCS (LNAI), vol. 1810, pp. 109–116. Springer, Heidelberg (2000)
  11. Guerra-Salcedo, C., Whitley, D.: Feature selection mechanisms for ensemble creation: a genetic search perspective. In: AAAI 1999 (1999)
  12. Tsymbal, A., Pechenizkiy, M., Cunningham, P.: Diversity in search strategies for ensemble feature selection. Information Fusion 6, 83–98 (2005)
  13. Günter, S., Bunke, H.: Feature selection algorithms for the generation of multiple classifier systems and their application to handwritten word recognition. Pattern Recognition Letters 25, 1323–1336 (2004)
  14. Kudo, M., Sklansky, J.: Comparison of algorithms that select features for pattern classifiers. Pattern Recognition 33, 25–41 (2000)
  15. Pudil, P., Novovičová, J., Kittler, J.: Floating search methods in feature selection. Pattern Recognition Letters 15, 1119–1125 (1994)
  16. Blake, C., Merz, C.: UCI repository of machine learning databases (1998)
  17. Fleiss, J.: Statistical Methods for Rates and Proportions (1981)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • D. B. Redpath (1)
  • K. Lebart (1)
  1. ECE, School of EPS, Heriot-Watt University, Edinburgh, UK
