Hierarchical Neural Networks Utilising Dempster-Shafer Evidence Theory

  • Rebecca Fay
  • Friedhelm Schwenker
  • Christian Thiel
  • Günther Palm
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4087)


Abstract

Hierarchical neural networks offer many benefits for classification problems, even when only simple, decision-tree-like methods are used to retrieve the classification result. More elaborate ways of evaluating the hierarchy's output, which take into account the complete information the hierarchy provides, yield improved classification results. Because hierarchical output space decomposition is inherent to hierarchical neural networks, Dempster-Shafer evidence theory suggests itself: it allows evidence to be represented at different levels of abstraction and, moreover, makes it possible to differentiate between uncertainty and ignorance. The proposed approach has been evaluated on three different data sets and consistently improved classification results compared to the simple decision-tree-like retrieval method.
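The evidence-theoretic machinery the abstract refers to rests on basic probability assignments (BPAs) and Dempster's rule of combination. As a minimal illustration only (not the paper's implementation), the sketch below combines two hypothetical BPAs, one from a coarse node of a hierarchy supporting a subset of classes and one from a finer node supporting a single class; mass placed on the whole frame of discernment expresses ignorance, while mass split over known subsets expresses uncertainty. The function and variable names (`combine`, `m_coarse`, `m_fine`) are assumptions chosen for the example.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments over the same frame of discernment. Focal elements
    are frozensets; each BPA's masses must sum to 1."""
    combined = {}
    conflict = 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + mA * mB
        else:
            conflict += mA * mB  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalise by the non-conflicting mass (Dempster's normalisation).
    return {C: m / (1.0 - conflict) for C, m in combined.items()}

# Frame of discernment: three leaf classes of a hypothetical hierarchy.
frame = frozenset({"a", "b", "c"})

# Coarse node: supports the subset {a, b}; the residual mass on the
# whole frame represents ignorance, not uncertainty between classes.
m_coarse = {frozenset({"a", "b"}): 0.7, frame: 0.3}

# Finer node: supports class "a" directly.
m_fine = {frozenset({"a"}): 0.6, frame: 0.4}

m = combine(m_coarse, m_fine)
# Combination concentrates mass on {a}: m({a}) = 0.6, m({a,b}) = 0.28,
# m(frame) = 0.12, so some ignorance survives the fusion step.
```

Because no focal elements here have an empty intersection, the conflict term is zero and no renormalisation occurs; with contradictory sources, the `1 - conflict` divisor redistributes the surviving mass.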


Keywords

Support Vector Machine · Radial Basis Function Network · Belief Function · Successor Node · Basic Probability Assignment

These keywords were added by machine and not by the authors. This process is experimental, and the keywords may be updated as the learning algorithm improves.


References

  1. Chen, Y., Crawford, M., Ghosh, J.: Integrating support vector machines in a hierarchical output space decomposition framework. In: IEEE International Geoscience and Remote Sensing Symposium, vol. II, pp. 949–952 (2004)
  2. Kumar, S., Ghosh, J., Crawford, M.: Hierarchical fusion of multiple classifiers for hyperspectral data analysis. Pattern Analysis and Applications 5(2), 210–220 (2002)
  3. Cheong, S., Oh, S., Lee, S.Y.: Support vector machines with binary tree architecture for multi-class classification. Neural Information Processing – Letters and Reviews 2(3), 47–51 (2004)
  4. Schwenker, F.: Solving multi-class pattern recognition problems with tree structured support vector machines. In: Radig, B., Florczyk, S. (eds.) Mustererkennung 2001, pp. 283–290. Springer, Heidelberg (2001)
  5. Simon, S., Schwenker, F., Kestler, H.A., Kraetzschmar, G.K., Palm, G.: Hierarchical object classification for autonomous mobile robots. In: International Conference on Artificial Neural Networks (ICANN), pp. 831–836 (2002)
  6. Shafer, G.: A Mathematical Theory of Evidence. Princeton University Press, Princeton (1976)
  7. Dempster, A.P.: A generalization of Bayesian inference. Journal of the Royal Statistical Society, Series B 30, 205–247 (1968)
  8. Smets, P., Kennes, R.: The transferable belief model. Artificial Intelligence 66(2), 191–234 (1994)
  9. Smets, P.: The combination of evidence in the transferable belief model. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(5), 447–458 (1990)
  10. Schwenker, F., Kestler, H.A., Palm, G.: Three learning phases for radial-basis-function networks. Neural Networks 14, 439–458 (2001)
  11. Nene, S.A., Nayar, S.K., Murase, H.: Columbia Object Image Library (COIL-20). Technical Report CUCS-005-96, Columbia University (1996)
  12. Frey, P.W., Slate, D.J.: Letter recognition using Holland-style adaptive classifiers. Machine Learning 6(2), 161–182 (1991)
  13. Kressel, U.H.G.: The impact of the learning-set size in handwritten-digit recognition. In: Proceedings of the International Conference on Artificial Neural Networks, ICANN 1991, pp. 1685–1689. Elsevier Science, Amsterdam (1991)
  14. Bouckaert, R.R., Frank, E.: Evaluating the replicability of significance tests for comparing learning algorithms. In: Dai, H., Srikant, R., Zhang, C. (eds.) PAKDD 2004. LNCS (LNAI), vol. 3056, pp. 3–12. Springer, Heidelberg (2004)
  15. Rajan, S., Ghosh, J.: An empirical comparison of hierarchical vs. two-level approaches to multiclass problems. In: Roli, F., Kittler, J., Windeatt, T. (eds.) MCS 2004. LNCS, vol. 3077, pp. 283–292. Springer, Heidelberg (2004)
  16. Mandler, E., Schürmann, J.: Combining the classification results of independent classifiers based on the Dempster/Shafer theory of evidence. In: Pattern Recognition and Artificial Intelligence (PRAI), pp. 381–393 (1988)
  17. Xu, L., Krzyzak, A., Suen, C.Y.: Methods of combining multiple classifiers and their application to handwriting recognition. IEEE Transactions on Systems, Man and Cybernetics 22(3), 418–435 (1992)
  18. Rogova, G.: Combining the results of several neural network classifiers. Neural Networks 7(5), 777–781 (1994)
  19. Kuncheva, L.I., Bezdek, J.C., Duin, R.P.W.: Decision templates for multiple classifier fusion: an experimental comparison. Pattern Recognition 34(2), 299–314 (2001)
  20. Al-Ani, A.: A new technique for combining multiple classifiers using the Dempster-Shafer theory of evidence. Journal of Artificial Intelligence Research 17, 333–361 (2002)
  21. Thiel, C., Schwenker, F., Palm, G.: Using Dempster-Shafer theory in MCF systems to reject samples. In: Oza, N.C., Polikar, R., Kittler, J., Roli, F. (eds.) MCS 2005. LNCS, vol. 3541, pp. 118–127. Springer, Heidelberg (2005)
  22. Milisavljevic, N., Bloch, I.: Sensor fusion in anti-personnel mine detection using a two-level belief function model. IEEE Transactions on Systems, Man and Cybernetics – Part C: Applications and Reviews 33(2), 269–283 (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Rebecca Fay (1)
  • Friedhelm Schwenker (1)
  • Christian Thiel (1)
  • Günther Palm (1)

  1. Department of Neural Information Processing, University of Ulm, Ulm, Germany
