Improving Product by Moderating k-NN Classifiers

  • F. M. Alkoot
  • J. Kittler
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2096)


The veto effect, caused by contradicting experts that output zero probability estimates, leads fusion strategies such as the product rule to perform suboptimally. This can be resolved by moderation. The moderation formula is derived for the k-NN classifier using a Bayesian prior, and the merits of moderation are examined on real data sets.
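The veto effect can be illustrated with a small sketch. The exact moderation formula derived in the paper is not reproduced here; as a stand-in, the sketch below smooths the raw k-NN frequency estimate with a uniform (Laplace) prior, which is the simplest Bayesian moderation of this kind and suffices to show why a single zero estimate vetoes a class under the product rule. The function names and the toy neighbour counts are illustrative assumptions, not from the paper.

```python
import numpy as np

def knn_posterior(class_counts, k):
    """Raw k-NN estimate: fraction of the k neighbours in each class.
    A class with zero neighbours gets probability exactly 0, which
    vetoes that class under the product rule."""
    return np.asarray(class_counts, dtype=float) / k

def moderated_posterior(class_counts, k, n_classes):
    """Moderated estimate via a uniform (Laplace) prior: every class
    receives strictly positive probability, so no expert can veto."""
    return (np.asarray(class_counts, dtype=float) + 1.0) / (k + n_classes)

# Two k-NN experts, k = 5 neighbours, 2 classes (A, B).
# Expert 1 strongly supports class A; expert 2 sees no class-A neighbours.
e1 = [4, 1]
e2 = [0, 5]

raw = knn_posterior(e1, 5) * knn_posterior(e2, 5)               # product rule
mod = moderated_posterior(e1, 5, 2) * moderated_posterior(e2, 5, 2)

print(raw)  # class A is vetoed: its fused score is exactly 0
print(mod)  # both classes keep positive support; fusion stays informative
```

With the raw estimates, expert 2's single zero annihilates class A's fused score regardless of how strongly expert 1 supports it; after moderation both scores remain positive and the product can still weigh the two experts' evidence.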







Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • F. M. Alkoot¹
  • J. Kittler¹
  1. Centre for Vision, Speech and Signal Processing, School of Electronics, Computing and Mathematics, University of Surrey, Guildford, UK
