Some Remarks on Chosen Methods of Classifier Fusion Based on Weighted Voting

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5572)

Abstract

Multiple Classifier Systems are nowadays one of the most promising directions in pattern recognition. There are many methods of decision making by an ensemble of classifiers. The most popular are methods that originate in voting, where the decision of the combined classifier is a combination of the individual classifiers' outputs. This work presents a comparative analysis of selected classifier fusion methods based on weighted voting of the classifiers' responses and on the combination of the classifiers' discriminant functions. We discuss which of the presented methods could produce a combined classifier better than the Oracle classifier. Results of computer experiments carried out on benchmark and computer-generated data, which confirm our analysis, are also presented.
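To make the two fusion schemes named above concrete, the sketch below illustrates (i) weighted voting on crisp class labels, (ii) a weighted linear combination of discriminant functions, and (iii) the Oracle reference, which is counted correct whenever at least one ensemble member is correct. This is a minimal sketch assuming NumPy; the function names, toy data, and weight values are illustrative assumptions and do not reproduce the paper's particular weight-selection rules or experimental setup.

```python
# Illustrative sketch of weighted-voting fusion (not the paper's exact method).
# Assumes each base classifier returns both a crisp label and a vector of
# discriminant-function values (e.g. class supports or posteriors).
import numpy as np

def weighted_label_vote(labels, weights, n_classes):
    """Fuse crisp class labels by weighted voting.

    labels  : (n_classifiers,) predicted class indices for one object
    weights : (n_classifiers,) non-negative classifier weights
    """
    scores = np.zeros(n_classes)
    for lbl, w in zip(labels, weights):
        scores[lbl] += w                  # each vote counts with its weight
    return int(np.argmax(scores))         # class with the largest weighted vote

def weighted_discriminant_fusion(supports, weights):
    """Fuse soft outputs by a weighted sum of discriminant functions.

    supports : (n_classifiers, n_classes) discriminant values for one object
    weights  : (n_classifiers,) non-negative classifier weights
    """
    combined = weights @ supports          # linear combination of supports
    return int(np.argmax(combined))

def oracle_correct(labels, true_label):
    """Oracle abstraction: correct if at least one ensemble member is correct."""
    return any(lbl == true_label for lbl in labels)

# Toy usage: three classifiers, three classes; weights are hand-picked here,
# e.g. proportional to each classifier's estimated accuracy.
labels   = np.array([0, 2, 2])
supports = np.array([[0.6, 0.3, 0.1],
                     [0.2, 0.3, 0.5],
                     [0.1, 0.4, 0.5]])
weights  = np.array([0.5, 0.3, 0.2])

print(weighted_label_vote(labels, weights, n_classes=3))
print(weighted_discriminant_fusion(supports, weights))
print(oracle_correct(labels, true_label=2))
```

In both schemes the fused decision is the class maximizing the weighted score; the Oracle serves only as a reference point in the comparison.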

Keywords

Discriminant Function · Class Label · Class Number · Weighted Vote · Common Classifier

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

Chair of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland
