Diversity-Based Classifier Selection for Adaptive Object Tracking

  • Ingrid Visentini
  • Josef Kittler
  • Gian Luca Foresti
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5519)

Abstract

In this work we propose a novel pairwise diversity measure, inspired by the Fisher linear discriminant, to construct a classifier ensemble for tracking a non-rigid object in a complex environment. A subset of continuously updated classifiers is selected by exploiting their ability to distinguish the target from the background while, at the same time, promoting independent errors. This reduced ensemble is employed in the target search phase, speeding up the system while keeping performance comparable to state-of-the-art algorithms. Experiments conducted on a Pan-Tilt-Zoom camera video sequence demonstrate the effectiveness of the proposed approach in coping with pose variations of the target.
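The selection idea sketched in the abstract can be illustrated in code. The snippet below is a minimal, hypothetical sketch (not the authors' implementation): it assumes each classifier produces confidence scores on target and background patches, measures discriminability with a Fisher-like ratio, measures pairwise diversity as disagreement between thresholded predictions, and greedily picks a subset that balances the two via a weight `alpha`. All function names and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def fisher_ratio(scores_pos, scores_neg):
    """Fisher-like separability: (mu+ - mu-)^2 / (var+ + var-)."""
    m1, m2 = scores_pos.mean(), scores_neg.mean()
    v1, v2 = scores_pos.var(), scores_neg.var()
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)  # epsilon avoids division by zero

def disagreement(pred_a, pred_b):
    """Pairwise diversity: fraction of samples on which two classifiers disagree."""
    return np.mean(pred_a != pred_b)

def select_ensemble(pos, neg, k, alpha=0.5):
    """Greedily pick k classifiers balancing separability and diversity.

    pos, neg: arrays of shape (n_classifiers, n_samples) holding each
    classifier's scores on target (pos) and background (neg) patches.
    """
    n = pos.shape[0]
    # Thresholded labels over all samples, used for the disagreement measure.
    preds = np.concatenate([pos, neg], axis=1) > 0.5
    sep = np.array([fisher_ratio(pos[i], neg[i]) for i in range(n)])
    chosen = [int(np.argmax(sep))]  # seed with the most discriminative classifier
    while len(chosen) < k:
        best, best_score = None, -np.inf
        for j in range(n):
            if j in chosen:
                continue
            # Average diversity of candidate j against the current subset.
            div = np.mean([disagreement(preds[j], preds[c]) for c in chosen])
            score = alpha * sep[j] + (1 - alpha) * div
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen
```

In a tracking loop, the reduced subset returned by `select_ensemble` would then score candidate windows in the search phase, which is where the speed-up comes from.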

Keywords

Video sequence, Local binary pattern, Search phase, Independent error, Fisher linear discriminant



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Ingrid Visentini (1)
  • Josef Kittler (2)
  • Gian Luca Foresti (1)
  1. Dept. of Mathematics and Computer Science, University of Udine, Udine, Italy
  2. CVSSP, University of Surrey, Guildford, UK
