
Decision Committee Learning with Dynamic Integration of Classifiers

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1884)

Abstract

Decision committee learning has demonstrated spectacular success in reducing the classification error of learned classifiers. These techniques develop a classifier in the form of a committee of subsidiary classifiers, whose outputs are usually combined by majority vote. Voting, however, has a shortcoming: it is unable to take local expertise into account. When a new instance is difficult to classify, the average committee member is likely to misclassify it, and a majority vote will then more probably produce a wrong prediction as well. Instead of voting, dynamic integration of classifiers can be used, based on the assumption that each committee member is most accurate within certain subareas of the whole feature space. In this paper, the proposed dynamic integration technique is evaluated with AdaBoost and Bagging, the decision committee approaches that have recently received extensive attention. The comparison shows that boosting and bagging often achieve significantly better accuracy with dynamic integration of classifiers than with simple voting.
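To make the contrast with voting concrete, the sketch below implements one simple form of the dynamic integration idea, dynamic selection: each committee member's errors are recorded on the training instances, the neighbourhood of a new instance is used to estimate each member's local error, and the locally best member alone makes the prediction. This is a minimal illustration under assumptions of our own (a scikit-learn bagged tree committee, a k = 7 neighbourhood, and the Iris data), not the paper's exact algorithm.

```python
# Minimal sketch of dynamic selection over a bagged committee.
# Illustrative only: the committee type, k = 7 neighbourhood, and helper
# names are our assumptions, not taken from the paper.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(
    *load_iris(return_X_y=True), random_state=0
)

# 1. Build the committee (bagging over decision trees).
committee = BaggingClassifier(
    DecisionTreeClassifier(), n_estimators=10, random_state=0
)
committee.fit(X_train, y_train)

# 2. Record each member's errors on the training instances
#    (0 = correct, 1 = wrong); shape: (n_members, n_train).
member_errors = np.stack(
    [(m.predict(X_train) != y_train).astype(float)
     for m in committee.estimators_]
)

# 3. For a new instance, estimate each member's local error from its
#    k nearest training neighbours, then trust the locally best member.
knn = NearestNeighbors(n_neighbors=7).fit(X_train)

def predict_dynamic(x):
    _, idx = knn.kneighbors(x.reshape(1, -1))
    local_error = member_errors[:, idx[0]].mean(axis=1)  # per-member local error
    best = int(np.argmin(local_error))                   # dynamic selection
    return committee.estimators_[best].predict(x.reshape(1, -1))[0]

preds = np.array([predict_dynamic(x) for x in X_test])
print("dynamic selection accuracy:", (preds == y_test).mean())
print("committee vote accuracy:   ", committee.score(X_test, y_test))
```

On such a small dataset the two accuracies will often tie; the point of the sketch is the mechanism: the member that is trusted changes from one test instance to the next, which is exactly the local expertise a global majority vote ignores.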

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tsymbal, A. (2000). Decision Committee Learning with Dynamic Integration of Classifiers. In: Štuller, J., Pokorný, J., Thalheim, B., Masunaga, Y. (eds) Current Issues in Databases and Information Systems. ADBIS-DASFAA 2000. Lecture Notes in Computer Science, vol 1884. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44472-6_21

  • DOI: https://doi.org/10.1007/3-540-44472-6_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67977-6

  • Online ISBN: 978-3-540-44472-5
