On the effect of calibration in classifier combination

Applied Intelligence

Abstract

A general approach to classifier combination treats each model as a probabilistic classifier that outputs a posterior probability of class membership. In this scenario, not only the quality and diversity of the models are relevant, but also how well calibrated their estimated probabilities are. In this paper, we study the role of calibration before and after classifier combination, focusing on evaluation measures such as MSE and AUC, which account for good probability estimation better than other common evaluation measures. We present a series of findings that allow us to recommend several layouts for the use of calibration in classifier combination. We also empirically analyse a new non-monotonic calibration method that obtains better results for classifier combination than existing monotonic calibration methods.
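
To make the setting concrete, the sketch below implements one of the layouts studied, calibration before combination: each base classifier is calibrated with isotonic regression (the pool-adjacent-violators method of [2, 40]) on a held-out split, the calibrated posteriors are averaged, and the ensemble is evaluated with MSE (the Brier score) and AUC. The dataset, models and splits here are illustrative assumptions, not the paper's experimental setup.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.isotonic import IsotonicRegression
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import brier_score_loss, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    # Illustrative binary dataset, split into train / calibration / test.
    X, y = make_classification(n_samples=2000, random_state=0)
    X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
    X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    models = [DecisionTreeClassifier(max_depth=5, random_state=0),
              GaussianNB(),
              LogisticRegression(max_iter=1000)]

    calibrated = []
    for m in models:
        m.fit(X_tr, y_tr)
        # Learn a monotonic map from raw scores to calibrated probabilities
        # on the held-out calibration split (isotonic regression / PAV).
        iso = IsotonicRegression(out_of_bounds="clip")
        iso.fit(m.predict_proba(X_cal)[:, 1], y_cal)
        calibrated.append((m, iso))

    # Combine the calibrated posteriors by simple (unweighted) averaging.
    p = np.mean([iso.predict(m.predict_proba(X_te)[:, 1])
                 for m, iso in calibrated], axis=0)
    print("MSE (Brier):", brier_score_loss(y_te, p), "AUC:", roc_auc_score(y_te, p))

Platt scaling [37] would be a drop-in monotonic alternative for the isotonic step.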

Notes

  1. This distribution has been used to model probabilities for binary cases in the probit model or in truncated regression [1].
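
     For reference (standard background, not text from the paper), the probit model mentioned here obtains a binary class probability by passing a linear predictor through the standard normal cdf:

        P(y = 1 \mid \mathbf{x}) = \Phi(\mathbf{x}^\top \boldsymbol{\beta}) = \int_{-\infty}^{\mathbf{x}^\top \boldsymbol{\beta}} \frac{1}{\sqrt{2\pi}} \, e^{-t^2/2} \, dt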

  2. These results are averages over datasets. Although the measures for different datasets are not commensurable, we use the means to ease the presentation of results. Nonetheless, the individual results for the 30 datasets are still used for the statistical tests.
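
     As an illustration of that last step, the sketch below runs a Friedman test over per-dataset results, in the spirit of the methodology of Demšar [15]; the results matrix is hypothetical dummy data, not the paper's.

        import numpy as np
        from scipy.stats import friedmanchisquare

        # 30 datasets x 3 compared configurations; dummy values for illustration.
        rng = np.random.default_rng(0)
        results = rng.uniform(0.6, 0.9, size=(30, 3))

        # The Friedman test ranks methods within each dataset, so it sidesteps
        # the commensurability problem of averaging raw measures across datasets.
        stat, p_value = friedmanchisquare(*results.T)  # one argument per method
        print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.3f}")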

  3. We tried the six weighting methods shown in Table 2, but the best results were obtained with WCGINI and WCIMSE, so in what follows we only show results for these two.
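
     The exact definitions of WCGINI and WCIMSE are given in Table 2 of the paper. As a generic illustration of performance-weighted combination (an assumed stand-in for those definitions, not a reproduction of them), one can weight each member by its inverse MSE on a validation split:

        import numpy as np

        def combine_weighted(val_probs, val_y, test_probs):
            """Weighted average of per-model positive-class probability vectors.

            val_probs, test_probs: lists of 1-D arrays, one per model;
            val_y: 0/1 labels of the validation split.
            """
            mse = np.array([np.mean((p - val_y) ** 2) for p in val_probs])
            w = 1.0 / mse   # better (lower-MSE) models get larger weights
            w /= w.sum()    # normalise so the weights sum to one
            return sum(wi * p for wi, p in zip(w, test_probs))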

  4. Apart from the magnitude of the results (values are generally better without the random classifier, as expected), the relative differences are similar, so the conclusions drawn without the random classifier are similar to those drawn with it. From here on, we only show the results that include the random classifier.

References

  1. Amemiya T (1973) Regression analysis when the dependent variable is truncated normal. Econometrica 41(6):997–1016

  2. Ayer M, Brunk H, Ewing G, Reid W, Silverman E (1955) An empirical distribution function for sampling with incomplete information. Ann Math Stat 26:641–647

  3. Bella A, Ferri C, Hernández-Orallo J, Ramírez-Quintana M (2009) Calibration of machine learning models. In: Handbook of research on machine learning applications. IGI Global, Hershey, pp 128–146

  4. Bella A, Ferri C, Hernández-Orallo J, Ramírez-Quintana M (2009) Similarity-binning averaging: a generalisation of binning calibration. In: Intelligent data engineering and automated learning—IDEAL 2009. Lecture notes in computer science, vol 5788. Springer, Berlin/Heidelberg, pp 341–349

  5. Bennett PN (2006) Building reliable metaclassifiers for text learning. PhD thesis, Carnegie Mellon University

  6. Bennett PN, Dumais ST, Horvitz E (2005) The combination of text classifiers using reliability indicators. Inf Retr 8(1):67–98

  7. Blake C, Merz C (1998) UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html

  8. Breiman L (1996) Bagging predictors. Mach Learn 24:123–140

  9. Brier G (1950) Verification of forecasts expressed in terms of probabilities. Mon Weather Rev 78:1–3

  10. Brümmer N (2010) Measuring, refining and calibrating speaker and language information extracted from speech. PhD thesis, University of Stellenbosch

  11. Canuto A, Santos A, Vargas R (2011) Ensembles of ARTMAP-based neural networks: an experimental study. Appl Intell 35:1–17

  12. Caruana R, Munson A, Mizil AN (2006) Getting the most out of ensemble selection. In: ICDM ’06: proceedings of the sixth international conference on data mining. IEEE Computer Society, Washington, pp 828–833

  13. Caruana R, Niculescu-Mizil A (2004) Data mining in metric space: an empirical analysis of supervised learning performance criteria. In: Proceedings of the tenth ACM SIGKDD international conference on knowledge discovery and data mining, KDD ’04. ACM Press, New York, pp 69–78

  14. Cohen I, Goldszmidt M (2004) Properties and benefits of calibrated classifiers. In: Proceedings of the 8th European conference on principles and practice of knowledge discovery in databases, PKDD ’04. Springer, Berlin, pp 125–136

  15. Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30

  16. Dietterich TG (2000) Ensemble methods in machine learning. In: Proceedings of the first international workshop on multiple classifier systems, MCS ’00. Springer, London, pp 1–15

  17. Dietterich TG (2000) An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Mach Learn 40:139–157

  18. Fahim M, Fatima I, Lee S, Lee Y (2012) EEM: evolutionary ensembles model for activity recognition in smart homes. Appl Intell, 1–11. doi:10.1007/s10489-012-0359-7

  19. Ferri C, Flach P, Hernández-Orallo J (2004) Delegating classifiers. In: Proceedings of the twenty-first international conference on machine learning, ICML ’04. ACM Press, New York, pp 37–45

  20. Ferri C, Hernández-Orallo J, Modroiu R (2009) An experimental comparison of performance measures for classification. Pattern Recognit Lett 30:27–38

  21. Ferri C, Hernández-Orallo J, Salido M (2003) Volume under the ROC surface for multi-class problems. Exact computation and evaluation of approximations. In: Proceedings of 14th European conference on machine learning, pp 108–120

  22. Flach P, Blockeel H, Ferri C, Hernández-Orallo J, Struyf J (2003) Decision support for data mining: an introduction to ROC analysis and its applications. In: Data mining and decision support: integration and collaboration. Kluwer Academic, Boston, pp 81–90

  23. Freund Y, Schapire RE (1996) Experiments with a new boosting algorithm. In: International conference on machine learning, pp 148–156

  24. Gama J, Brazdil P (2000) Cascade generalization. Mach Learn 41:315–343

  25. Garczarek U (2002) Classification rules in standardized partition spaces. PhD thesis, Universität Dortmund

  26. Gebel M (2009) Multivariate calibration of classifier scores into the probability space. PhD thesis, University of Dortmund

  27. Hand DJ, Till RJ (2001) A simple generalisation of the area under the ROC curve for multiple class classification problems. Mach Learn 45:171–186

  28. Hoeting JA, Madigan D, Raftery AE, Volinsky CT (1999) Bayesian model averaging: a tutorial. Stat Sci 14(4):382–417

  29. Khor K, Ting C, Phon-Amnuaisuk S (2012) A cascaded classifier approach for improving detection rates on rare attack categories in network intrusion detection. Appl Intell 36:320–329

  30. Kuncheva LI (2002) A theoretical study on six classifier fusion strategies. IEEE Trans Pattern Anal Mach Intell 24:281–286

  31. Kuncheva LI (2004) Combining pattern classifiers: methods and algorithms. Wiley-Interscience, New York

  32. Kuncheva LI (2005) Diversity in multiple classifier systems. Inf Fusion 6(1):3–4

  33. Kuncheva LI, Whitaker CJ (2003) Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach Learn 51:181–207

  34. Lee H, Kim E, Pedrycz W (2012) A new selective neural network ensemble with negative correlation. Appl Intell, 1–11. doi:10.1007/s10489-012-0342-3

  35. Maudes J, Rodríguez J, García-Osorio C, Pardo C (2011) Random projections for linear SVM ensembles. Appl Intell 34:347–359

  36. Murphy AH (1972) Scalar and vector partitions of the probability score: part II. n-State situation. J Appl Meteorol 11:1182–1192

  37. Platt JC (1999) Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In: Advances in large margin classifiers. MIT Press, Cambridge, pp 61–74

  38. Raftery AE, Gneiting T, Balabdaoui F, Polakowski M (2005) Using Bayesian model averaging to calibrate forecast ensembles. Mon Weather Rev 133(5):1155–1174

  39. Rifkin R, Klautau A (2004) In defense of one-vs-all classification. J Mach Learn Res 5:101–141

  40. Robertson T, Wright FT, Dykstra RL (1988) Order restricted statistical inference. Wiley, New York

  41. Souza L, Pozo A, Rosa J, Neto A (2010) Applying correlation to enhance boosting technique using genetic programming as base learner. Appl Intell 33:291–301

  42. Tulyakov S, Jaeger S, Govindaraju V, Doermann D (2008) Review of classifier combination methods. In: Marinai S, Fujisawa H (eds) Machine learning in document analysis and recognition. Studies in computational intelligence. Springer, Berlin, pp 361–386

  43. Verma B, Hassan S (2011) Hybrid ensemble approach for classification. Appl Intell 34:258–278

  44. Wang C, Hunter A (2010) A low variance error boosting algorithm. Appl Intell 33:357–369

  45. Witten IH, Frank E (2002) Data mining: practical machine learning tools and techniques with Java implementations. SIGMOD Rec 31:76–77

  46. Wolpert DH (1992) Stacked generalization. Neural Netw 5:241–259

  47. Zadrozny B, Elkan C (2002) Transforming classifier scores into accurate multiclass probability estimates. In: Proceedings of the eighth ACM SIGKDD international conference on knowledge discovery and data mining, KDD ’02. ACM Press, New York, pp 694–699

Acknowledgements

We thank the anonymous reviewers for their comments, which have helped to improve this paper significantly. This work was supported by the MEC/MINECO projects CONSOLIDER-INGENIO CSD2007-00022, COST action IC0801 and TIN2010-21062-C02-02, GVA project PROMETEO/2008/051, and the REFRAME project granted by the European Coordinated Research on Long-term Challenges in Information and Communication Sciences & Technologies ERA-Net (CHIST-ERA), and funded by the Ministerio de Economía y Competitividad in Spain.

Author information

Corresponding author: Antonio Bella.

Cite this article

Bella, A., Ferri, C., Hernández-Orallo, J. et al. On the effect of calibration in classifier combination. Appl Intell 38, 566–585 (2013). https://doi.org/10.1007/s10489-012-0388-2
