Evidential joint calibration of binary SVM classifiers


In order to improve overall performance on a classification problem, one line of research consists in using several classifiers and fusing their outputs. To perform this fusion, some approaches merge the classifier outputs using a combination rule; this requires that the outputs be made comparable beforehand, which is usually achieved through a probabilistic calibration of each classifier. The fusion can also be performed by concatenating the classifier outputs into a vector and applying a joint probabilistic calibration to this vector. Recently, probabilistic calibration techniques for an individual classifier have been extended using evidence theory, in order to better represent the uncertainties inherent to the calibration process. In this paper, we adapt this latter idea to joint probabilistic calibration techniques, leading to evidential versions of joint calibration techniques. In addition, our proposal was tested on synthetic and real datasets, and the results show that it either outperforms or is comparable to state-of-the-art approaches.
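As background on the calibration step described in the abstract: probabilistic calibration of a single SVM typically maps its decision value to a probability through a fitted sigmoid (Platt 1999), and joint calibration instead fits one model on the vector of concatenated decision values, in the spirit of approaches such as Zhong and Kwok (2013). The sketch below is a minimal, purely probabilistic illustration with hypothetical function names, using a plain logistic model fitted by gradient descent; with a single classifier (d = 1) it reduces to Platt-style sigmoid calibration. The evidential versions proposed in the paper replace such point estimates with belief functions, which this sketch does not cover.

```python
import numpy as np

def sigmoid(z):
    """Numerically safe logistic function."""
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def fit_joint_calibration(S, y, lr=0.1, n_iter=5000):
    """Fit p(y=1 | s) = sigmoid(w @ s + b) on an (n, d) matrix S of
    concatenated classifier decision values, by gradient descent on
    the negative log-likelihood. With d = 1 this is Platt-style
    sigmoid calibration of a single classifier."""
    n, d = S.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = sigmoid(S @ w + b)
        g = (p - y) / n          # dNLL/dz for each sample, averaged
        w -= lr * (S.T @ g)
        b -= lr * g.sum()
    return w, b

def calibrated_prob(s, w, b):
    """Calibrated probability of the positive class for one score vector."""
    return sigmoid(np.dot(s, w) + b)
```

Each row of S would hold the decision values of the d base SVMs for one validation sample; fitting on held-out data rather than the training set is the usual safeguard against overly optimistic calibration.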



Notes

  1. We used the software for the evidential k-NN classifier with parameter optimization available at https://www.hds.utc.fr/~tdenoeux/dokuwiki/en/software/k-nn


References

  1. Bache K, Lichman M (2013) UCI machine learning repository. University of California, School of Information and Computer Sciences. http://archive.ics.uci.edu/ml

  2. Bagley SC, White H, Golomb BA (2001) Logistic regression in the medical literature: standards for use and reporting, with particular attention to one medical domain. J Clin Epidemiol 54(10):979–985

  3. Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2:27:1–27:27. http://www.csie.ntu.edu.tw/~cjlin/libsvm

  4. Dempster AP (1966) New methods for reasoning towards posterior distributions based on sample data. Ann Math Stat 37(2):355–374

  5. Dempster AP (1968) Upper and lower probabilities generated by a random closed interval. Ann Math Stat 39(3):957–966

  6. Denœux T (1995) A k-nearest neighbor classification rule based on Dempster–Shafer theory. IEEE Trans Syst Man Cybern 25(5):804–813

  7. Denœux T (1997) Analysis of evidence-theoretic decision rules for pattern classification. Pattern Recognit 30(7):1095–1107

  8. Denœux T (2014) Likelihood-based belief function: justification and some extensions to low-quality data. Int J Approx Reason 55(7):1535–1547

  9. Denœux T, Smets P (2006) Classification using belief functions: relationship between case-based and model-based approaches. IEEE Trans Syst Man Cybern B 36(6):1395–1406

  10. Duin RPW (2002) The combining classifier: to train or not to train? In: Proceedings of the 16th International Conference on Pattern Recognition, Quebec City, Quebec, Canada, August, 2002, IEEE, vol 2, pp 765–770

  11. Hosmer DW, Lemeshow S, Sturdivant RX (2013) Applied logistic regression, vol 398. Wiley, Hoboken

  12. Kanjanatarakul O, Sriboonchitta S, Denœux T (2014) Forecasting using belief functions: an application to marketing econometrics. Int J Approx Reason 55(5):1113–1128

  13. Kanjanatarakul O, Denœux T, Sriboonchitta S (2016) Prediction of future observations using belief functions: a likelihood-based approach. Int J Approx Reason 72:71–94

  14. Kuncheva LI (2004) Combining pattern classifiers: methods and algorithms. Wiley, Hoboken

  15. Minary P, Pichon F, Mercier D, Lefevre E, Droit B (2016) An evidential pixel-based face blurring approach. In: Vejnarov J, Kratochvil V (eds) Proceedings of the 4th International Conference on Belief Functions, Prague, Czech Republic, September 21–23, Springer, Lecture Notes in Computer Science, vol 9861, pp 222–230

  16. Minary P, Pichon F, Mercier D, Lefevre E, Droit B (2017) Evidential joint calibration of binary SVM classifiers using logistic regression. In: Proceedings of the 11th International Conference on Scalable Uncertainty Management, Granada, Spain, October 4–6, 2017, Lecture Notes in Artificial Intelligence, Springer, p 7

  17. Minka TP (2003) Algorithms for maximum-likelihood logistic regression. Technical Report 758, Carnegie Mellon University

  18. Nguyen HT (2006) An Introduction to Random Sets. Chapman and Hall/CRC Press, Boca Raton

  19. Platt JC (1999) Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Adv Large Margin Classif 10(3):61–74

  20. Shafer G (1976) A mathematical theory of evidence, vol 1. Princeton University Press, Princeton

  21. Smets P (1993) Belief functions: the disjunctive rule of combination and the generalized Bayesian theorem. Int J Approx Reason 9(1):1–35

  22. Smets P, Kennes R (1994) The transferable belief model. Artif Intell 66:191–243

  23. Tulyakov S, Jaeger S, Govindaraju V, Doermann D (2008) Review of classifier combination methods. In: Marinai S, Fujisawa H (eds) Machine learning in document analysis and recognition. Springer, Berlin, pp 361–386

  24. Xu P, Davoine F, Denœux T (2015) Evidential multinomial logistic regression for multiclass classifier calibration. In: Proceedings of the 18th international conference on information fusion, Washington, DC, USA, July 6–9, 2015, IEEE, pp 1106–1112

  25. Xu P, Davoine F, Zha H, Denœux T (2016) Evidential calibration of binary SVM classifiers. Int J Approx Reason 72:55–70

  26. Zadrozny B, Elkan C (2001) Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In: Proceedings of the 18th international conference on machine learning, Morgan Kaufmann, pp 609–616

  27. Zadrozny B, Elkan C (2002) Transforming classifier scores into accurate multiclass probability estimates. In: Proceedings of the 8th international conference on knowledge discovery and data mining, New York, NY, USA, 2002, ACM, pp 694–699

  28. Zhong W, Kwok JT (2013) Accurate probability calibration for multiple classifiers. In: Proceedings of the 23rd international joint conference on artificial intelligence, Beijing, China, August, 2013, pp 1939–1945

  29. Zouhal LM, Denœux T (1998) An evidence-theoretic k-NN rule with parameter optimization. IEEE Trans Syst Man Cybern C 28(2):263–271

Author information



Corresponding author

Correspondence to Pauline Minary.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This paper is an extended and revised version of Minary et al. (2017).

Communicated by A. Di Nola.

Cite this article

Minary, P., Pichon, F., Mercier, D. et al. Evidential joint calibration of binary SVM classifiers. Soft Comput 23, 4655–4671 (2019). https://doi.org/10.1007/s00500-018-3429-x


Keywords

  • Belief functions
  • Information fusion
  • Evidential calibration
  • Classification