Soft Computing, Volume 21, Issue 23, pp 6919–6932

Comparing dependent combination rules under the belief classifier fusion framework



Data fusion, within the evidence theory framework, consists in obtaining a single belief function by combining several belief functions induced from various information sources. Considerable attention has been paid to combination rules that handle beliefs induced from non-distinct (dependent) information sources. The most popular such rule is the cautious conjunctive rule proposed by Denœux. This rule has the empty set, also called the conflict, as an absorbing element: the mass assigned to the conflict tends toward 1 as the cautious conjunctive operator is applied a large number of times, so the conflict loses its initial role as an alarm signal announcing disagreement between sources. This problem led to the introduction of the normalized cautious rule, which totally ignores the mass assigned to the conflict. An intermediate rule between the cautious conjunctive and the normalized cautious rules, named the cautious Combination With Adaptive Conflict (cautious CWAC) rule, has been proposed to preserve the initial alarm role of the conflict. Despite this diversification, little effort has so far been devoted to identifying the most convenient combination rule. Thus, in this paper, we evaluate and compare the cautious conjunctive, normalized cautious and cautious CWAC rules in order to pick out the most appropriate one within the classifier fusion framework.
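The conflict-absorption effect described above is easiest to see with the plain unnormalized conjunctive rule; the following minimal sketch (not taken from the paper, and deliberately omitting the canonical decomposition into weight functions that the cautious rule itself requires) fuses several copies of two slightly disagreeing pieces of evidence and shows the mass on the empty set approaching 1:

```python
from itertools import product

def conjunctive(m1, m2):
    """Unnormalized conjunctive combination of two mass functions.

    Mass functions are dicts mapping frozenset focal elements to masses.
    Mass on non-intersecting focal sets accumulates on the empty set
    (the conflict), which is why the empty set is an absorbing element.
    """
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        out[inter] = out.get(inter, 0.0) + wa * wb
    return out

# Frame of discernment {x, y}; a source slightly favouring x over y.
X, Y = frozenset("x"), frozenset("y")
m = {X: 0.6, Y: 0.4}

combined = dict(m)
for _ in range(10):          # fuse ten further copies of the evidence
    combined = conjunctive(combined, m)

# After 11 sources, almost all mass sits on the conflict.
print(round(combined.get(frozenset(), 0.0), 4))  # → 0.9963
```

Normalization (Dempster's rule) would redistribute this conflict mass over the non-empty focal sets, which is what the normalized cautious rule does at the cost of discarding the alarm signal.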


Belief function theory · Combination rules · Dependent information sources · Multiclassifier fusion framework


Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


  1. Al-Ani A, Deriche M (2002) A new technique for combining multiple classifiers using the Dempster–Shafer theory of evidence. J Artif Intell Res 17:333–361
  2. Bertoni A, Folgieri R, Valentini G (2005) Biomolecular cancer prediction with random subspace ensembles of support vector machines. Neurocomputing 63:535–539
  3. Bi Y, Guan J, Bell D (2008) The combination of multiple classifiers using an evidential reasoning approach. Artif Intell 172(15):1731–1751
  4. Boubaker J, Elouedi Z, Lefevre E (2013) Conflict management with dependent information sources in the belief function framework. In: 14th International symposium of computational intelligence and informatics (CINTI). IEEE, pp 393–398
  5. Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140
  6. Cattaneo MEGV (2003) Combining belief functions issued from dependent sources. In: 3rd International symposium on imprecise probabilities and their applications (ISIPTA), vol 3
  7. Cho S-B, Kim JH (1995) Combining multiple neural networks by fuzzy integral for robust classification. IEEE Trans Syst Man Cybern 25(2):380–384
  8. Dempster AP (1967) Upper and lower probabilities induced by a multivalued mapping. Ann Math Stat 38:325–339
  9. Denoeux T (1999) Reasoning with imprecise belief structures. Int J Approx Reason 20(1):79–111
  10. Denoeux T (2006) The cautious rule of combination for belief functions and some extensions. In: 9th International conference on information fusion (FUSION 2006), pp 1–8
  11. Denoeux T (2008) Conjunctive and disjunctive combination of belief functions induced by nondistinct bodies of evidence. Artif Intell 172(2):234–264
  12. Denoeux T, Masson M-H (2012) Belief functions: theory and applications. In: 2nd International conference on belief functions, vol 164. Springer, New York
  13. Dietterich T (2000) An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Mach Learn 40(2):139–157
  14. Dubois D, Prade H (1988) Representation and combination of uncertainty with belief functions and possibility measures. Comput Intell 4(3):244–264
  15. Freund Y, Schapire RE (1997) A decision-theoretic generalization of on-line learning and an application to boosting. J Comput Syst Sci 55(1):119–139
  16. Geurts P, Ernst D, Wehenkel L (2006) Extremely randomized trees. Mach Learn 63(1):3–42
  17. Goebel K, Yan W (2004) Choosing classifiers for decision fusion. In: 7th International conference on information fusion, vol 1, pp 563–568
  18. Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH (2009) The WEKA data mining software: an update. ACM SIGKDD Explor Newsl 11(1):10–18
  19. Hansen LK, Salamon P (1990) Neural network ensembles. IEEE Trans Pattern Anal Mach Intell 12(10):993–1001
  20. Ho TK (1998) The random subspace method for constructing decision forests. IEEE Trans Pattern Anal Mach Intell 20(8):832–844
  21. Huang YS, Liu K, Suen CY (1995) The combination of multiple classifiers by a neural network approach. Int J Pattern Recognit Artif Intell 9(3):579–597
  22. Johansson R, Boström H, Karlsson A (2008) A study on class-specifically discounted belief for ensemble classifiers. In: IEEE international conference on multisensor fusion and integration for intelligent systems, pp 614–619
  23. Jousselme A, Grenier D, Bossé E (2001) A new distance between two bodies of evidence. Inf Fusion 2(2):91–101
  24. Kittler J, Hatef M, Duin RP, Matas J (1998) On combining classifiers. IEEE Trans Pattern Anal Mach Intell 20(3):226–239
  25. Kuncheva L, Rodríguez J, Plumpton C, Linden D, Johnston S (2010) Random subspace ensembles for fMRI classification. IEEE Trans Med Imaging 29(2):531–542
  26. Kuncheva L, Skurichina M, Duin RP (2002) An experimental study on diversity for bagging and boosting with linear classifiers. Inf Fusion 3(4):245–258
  27. Kuncheva L, Whitaker CJ (2003) Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach Learn 51(2):181–207
  28. Le CA, Huynh V-N, Shimazu A, Nakamori Y (2007) Combining classifiers for word sense disambiguation based on Dempster–Shafer theory and OWA operators. Data Knowl Eng 63(2):381–396
  29. Lefevre E, Colot O, Vannoorenberghe P (2002) Belief function combination and conflict management. Inf Fusion 3(2):149–162
  30. Lefevre E, Elouedi Z (2013) How to preserve the conflict as an alarm in the combination of belief functions? Decis Support Syst 56:326–333
  31. Mercier D, Cron G, Denoeux T, Masson M (2005) Fusion of multi-level decision systems using the transferable belief model. In: 8th International conference on information fusion (FUSION 2005), vol 2, pp 655–658
  32. Murphy CK (2000) Combining belief functions when evidence conflicts. Decis Support Syst 29(1):1–9
  33. Murphy P, Aha D (1996) UCI repository databases
  34. Pizzi NJ, Pedrycz W (2010) Aggregating multiple classification results using fuzzy integration and stochastic feature selection. Int J Approx Reason 51(8):883–894
  35. Quost B, Denoeux T, Masson M-H (2007) Pairwise classifier combination using belief functions. Pattern Recogn Lett 28(5):644–653
  36. Quost B, Denoeux T, Masson M-H (2008) Adapting a combination rule to non-independent information sources. In: 12th Information processing and management of uncertainty in knowledge-based systems (IPMU), pp 448–455
  37. Quost B, Masson M-H, Denoeux T (2011) Classifier fusion in the Dempster–Shafer framework using optimized t-norm based combination rules. Int J Approx Reason 52(3):353–374
  38. Reformat M, Yager RR (2008) Building ensemble classifiers using belief functions and OWA operators. Soft Comput 12(6):543–558
  39. Rodriguez JJ, Kuncheva LI, Alonso CJ (2006) Rotation forest: a new classifier ensemble method. IEEE Trans Pattern Anal Mach Intell 28(10):1619–1630
  40. Ruta D, Gabrys B (2005) Classifier selection for majority voting. Inf Fusion 6(1):63–81
  41. Shafer G (1976) A mathematical theory of evidence, vol 1. Princeton University Press, Princeton
  42. Sharkey AJ, Sharkey NE (1997) Combining diverse neural nets. Knowl Eng Rev 12(3):231–247
  43. Smets P (1988) The transferable belief model for quantified belief representation. In: Handbook of defeasible reasoning and uncertainty management systems, vol 1, pp 267–301
  44. Smets P (1990a) The combination of evidence in the transferable belief model. IEEE Trans Pattern Anal Mach Intell 12(5):447–458
  45. Smets P (1990b) The combination of evidence in the transferable belief model. IEEE Trans Pattern Anal Mach Intell 12(5):447–458
  46. Smets P (1995) The canonical decomposition of a weighted belief. In: 14th International joint conference on artificial intelligence (IJCAI), vol 95, pp 1896–1901
  47. Smets P (1998) The application of the transferable belief model to diagnostic problems. Int J Intell Syst 13:127–157
  48. Trabelsi A, Elouedi Z, Lefevre E (2015a) Belief function combination: comparative study in classifier fusion framework. In: 1st International symposium on advanced intelligent systems and informatics (AISI), vol 407, pp 425–435
  49. Trabelsi A, Elouedi Z, Lefevre E (2015b) Classifier fusion within the belief function framework using dependent combination rules. In: 22nd International symposium on methodologies for intelligent systems (ISMIS), vol 9384, pp 133–138
  50. Wolpert DH (1992) Stacked generalization. Neural Netw 5(2):241–259
  51. Woods K, Kegelmeyer WP Jr, Bowyer K (1997) Combination of multiple classifiers using local accuracy estimates. IEEE Trans Pattern Anal Mach Intell 19:405–410
  52. Xu L, Krzyzak A, Suen CY (1992) Methods of combining multiple classifiers and their applications to handwriting recognition. IEEE Trans Syst Man Cybern 22(3):418–435
  53. Xu P, Davoine F, Denoeux T (2014) Evidential logistic regression for binary SVM classifier calibration. In: 3rd International conference on belief functions (BELIEF). Springer, New York, pp 49–57
  54. Xu P, Davoine F, Zha H, Denoeux T (2016) Evidential calibration of binary SVM classifiers. Int J Approx Reason 72:55–70
  55. Yager RR (1987) On the Dempster–Shafer framework and new combination rules. Inf Sci 41(2):93–137
  56. Yen J (1990) Generalizing the Dempster–Shafer theory to fuzzy sets. IEEE Trans Syst Man Cybern 20(3):559–570
  57. Zadeh LA (1986) A simple view of the Dempster–Shafer theory of evidence and its implication for the rule of combination. AI Mag 7(2):85–90

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. LARODEC, Institut Supérieur de Gestion de Tunis, Université de Tunis, Tunis, Tunisia
  2. EA 3926, Laboratoire de Génie Informatique et d'Automatique de l'Artois (LGI2A), University of Artois, Béthune, France
