A Framework for Classifier Fusion: Is It Still Needed?

  • Josef Kittler
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1876)

Abstract

We consider the problem and issues of classifier fusion and discuss how they should be reflected in the fusion system architecture. We adopt the Bayesian viewpoint and show how this leads to classifier output moderation to compensate for sampling problems. We then discuss how the moderated outputs should be combined to reflect the prior distribution of models underlying the classifier designs. Finally, we elaborate on how the last stage of fusion should combine the complementary measurement information that may be available to different experts. This process is embodied in an overall architecture which shows why the fusion of raw expert outputs is a nonlinear function of those outputs, and how this function can be realised as a sequence of relatively simple processes.
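As a rough illustration of the moderation-then-combination pipeline described above, the sketch below applies a Laplace-style (Dirichlet-prior) smoothing to each expert's posterior estimates and then fuses the moderated outputs with a product rule. The smoothing formula, function names and sample values are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def moderate(posteriors, n_samples, n_classes):
        # Laplace-style (Dirichlet-prior) smoothing: the fewer training
        # samples an expert was designed on, the more its raw posterior
        # estimates are shrunk towards the uniform prior 1 / n_classes,
        # compensating for sampling error in small design sets.
        return (n_samples * posteriors + 1.0) / (n_samples + n_classes)

    def fuse(expert_posteriors, sample_sizes, n_classes):
        # Combine the moderated expert outputs with a product rule, which
        # suits experts observing complementary (conditionally independent)
        # measurements, then renormalise to a proper distribution.
        fused = np.ones(n_classes)
        for p, n in zip(expert_posteriors, sample_sizes):
            fused *= moderate(np.asarray(p, dtype=float), n, n_classes)
        return fused / fused.sum()

    # Three hypothetical experts over three classes, designed on training
    # sets of different sizes; the least well-trained expert (20 samples)
    # is pulled most strongly towards the uniform prior before fusion.
    experts = [np.array([0.7, 0.2, 0.1]),
               np.array([0.6, 0.3, 0.1]),
               np.array([0.5, 0.1, 0.4])]
    print(fuse(experts, sample_sizes=[50, 200, 20], n_classes=3))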

Keywords

Support Vector Machine, Feature Selection, Machine Intelligence, Output Moderation, Discriminatory Information


Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Josef Kittler
  1. Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
