Abstract
Uncertainty representation is a major issue in pattern recognition when the outputs of a classifier do not lead directly to a final decision, but are combined with those of other systems, or serve as input to an interactive decision process. In such contexts, it can be advantageous to resort to rich and flexible formalisms for representing and manipulating uncertain information, such as the Dempster-Shafer theory of evidence. In this paper, it is shown that the quality and reliability of the outputs of an evidence-theoretic classifier can be improved by an adaptation of the resample-and-combine approach introduced by Breiman and known as "bagging". This approach is explained and studied experimentally on simulated data. In particular, the results show that bagging improves classification accuracy and limits the influence of outliers and of ambiguous training patterns.
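The scheme described in the abstract can be sketched as follows (a simplified illustration, not the authors' exact method): bootstrap replicates of the training set are drawn, an evidential classifier is trained on each replicate, and the resulting mass functions are combined by averaging. The `evidential_knn` function below is a deliberately stripped-down stand-in for a Denœux-style evidential k-NN rule, with an assumed discounting parameter `alpha`.

```python
import random

def evidential_knn(train, x, alpha=0.95):
    # Simplified evidential 1-NN: the nearest neighbour supports its own
    # class with mass alpha; the remaining 1 - alpha is assigned to the
    # whole frame of discernment (represented here by the key "omega").
    nearest = min(train, key=lambda p: abs(p[0] - x))
    mass = {"omega": 1.0 - alpha}
    mass[nearest[1]] = alpha
    return mass

def bagged_masses(train, x, n_bags=25, seed=0):
    # Bagging: draw bootstrap replicates of the training set, classify x
    # with each replicate, and average the resulting mass functions.
    rng = random.Random(seed)
    combined = {}
    for _ in range(n_bags):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        for label, m in evidential_knn(boot, x).items():
            combined[label] = combined.get(label, 0.0) + m / n_bags
    return combined

if __name__ == "__main__":
    train = [(0.0, "a"), (0.1, "a"), (1.0, "b"), (1.1, "b")]
    print(bagged_masses(train, 0.05))
```

Averaging the bagged mass functions keeps the result a valid mass function (the masses still sum to one), and bootstrap replicates that exclude an outlier dilute its influence on the combined output, which is the intuition behind the robustness results reported in the paper.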
References
A. Appriou. Uncertain data aggregation in classification and tracking processes. In B. Bouchon-Meunier, editor, Aggregation and Fusion of Imperfect Information, pages 231–260. Physica-Verlag, Heidelberg, 1998.
L. Breiman. Bagging predictors. Machine Learning, 24:123–140, 1996.
T. Denœux. A k-nearest neighbor classification rule based on Dempster-Shafer theory. IEEE Transactions on Systems, Man and Cybernetics, 25(5):804–813, 1995.
T. Denœux. Analysis of evidence-theoretic decision rules for pattern classification. Pattern Recognition, 30(7):1095–1107, 1997.
T. Denœux. Application du modèle des croyances transférables en reconnaissance de formes. Traitement du Signal, 14(5):443–451, 1998.
T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting and randomization. Machine Learning, 40(2):1–19, 2000.
B. Efron and R. Tibshirani. An Introduction to the Bootstrap, volume 57 of Monographs on Statistics and Applied Probability. Chapman and Hall, New York, 1993.
B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996.
G. Rogova. Combining the results of several neural network classifiers. Neural Networks, 7(5):777–781, 1994.
G. Shafer. A Mathematical Theory of Evidence. Princeton University Press, Princeton, N.J., 1976.
P. Smets. The combination of evidence in the Transferable Belief Model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(5):447–458, 1990.
P. Smets. The application of the Transferable Belief Model to diagnosis problems. International Journal of Intelligent Systems, 13:127–158, 1998.
P. Smets. The Transferable Belief Model for quantified belief representation. In D. M. Gabbay and P. Smets, editors, Handbook of Defeasible Reasoning and Uncertainty Management Systems, volume 1, pages 267–301. Kluwer Academic Publishers, Dordrecht, 1998.
P. Smets and R. Kennes. The Transferable Belief Model. Artificial Intelligence, 66:191–243, 1994.
L. M. Zouhal and T. Denœux. An evidence-theoretic k-NN rule with parameter optimization. IEEE Transactions on Systems, Man and Cybernetics C, 28(2):263–271, 1998.
© 2002 Springer-Verlag Berlin Heidelberg
Cite this chapter
François, J., Grandvalet, Y., Denœux, T., Roger, J.-M. (2002). Bagging Improves Uncertainty Representation in Evidential Pattern Classification. In: Bouchon-Meunier, B., Gutiérrez-Ríos, J., Magdalena, L., Yager, R.R. (eds) Technologies for Constructing Intelligent Systems 1. Studies in Fuzziness and Soft Computing, vol 89. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1797-3_23
Print ISBN: 978-3-662-00329-9
Online ISBN: 978-3-7908-1797-3