Fast and Adaptive Deep Fusion Learning for Detecting Visual Objects

  • Nikolaos Doulamis
  • Anastasios Doulamis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7585)

Abstract

Object tracking/detection is currently based on a “shallow learning” paradigm: features are processed locally to build an object model, and adaptive methodologies are then applied to estimate the model parameters. Such an approach, however, loses the “whole picture” information required to maintain stable tracking over long time spans and under large visual changes. To overcome these obstacles, we need a “deep” information fusion framework. Deep learning is an emerging research area that emulates the efficiency and robustness with which the human brain represents information by propagating data through complex hierarchies. Implementing a deep fusion learning paradigm in a machine, however, raises research challenges, mainly due to the highly non-linear structures involved and the “curse of dimensionality”. Another difficulty, critical in computer vision applications, is that learning should be self-adapting to guarantee stable object detection over long time spans. In this paper, we propose a novel fast (real-time) and adaptive information fusion strategy that exploits the deep learning paradigm. The proposed framework integrates optimization strategies that update the non-linear model parameters in real time so as to trust, as much as possible, the current changes of the environment while causing minimal degradation of the previously gained experience.
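
As a rough illustration of the last point only, the sketch below shows one way such an adaptive update could look: samples from the current frame are fitted while a proximal penalty keeps the new weights close to the previously learned ones, so the present environment is trusted without discarding past experience. This is not the authors' actual optimization strategy; the toy network and all names (`forward`, `adapt`, `lam`, `W1_prev`, `W2_prev`) are hypothetical.

```python
# A minimal, hypothetical sketch of an adaptive "trust the present, preserve the past"
# update; it is NOT the authors' algorithm, and all names below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, X):
    """Toy non-linear model: one tanh hidden layer followed by a sigmoid output."""
    H = np.tanh(X @ W1)
    y_hat = 1.0 / (1.0 + np.exp(-(H @ W2)))
    return y_hat, H

def adapt(W1, W2, X_new, y_new, lam=1.0, lr=0.05, steps=200):
    """Fit current-frame samples while a proximal term lam * ||W - W_prev||^2
    penalizes deviation from the previously learned weights (minimal degradation)."""
    W1_prev, W2_prev = W1.copy(), W2.copy()
    for _ in range(steps):
        y_hat, H = forward(W1, W2, X_new)
        # Error signal at the output pre-activation (squared error through the sigmoid).
        delta = (y_hat - y_new) * y_hat * (1.0 - y_hat)
        gW2 = H.T @ delta + lam * (W2 - W2_prev)   # data term + proximity to the past
        dH = (delta @ W2.T) * (1.0 - H ** 2)       # back-propagate through the tanh layer
        gW1 = X_new.T @ dH + lam * (W1 - W1_prev)
        W1 = W1 - lr * gW1
        W2 = W2 - lr * gW2
    return W1, W2

# Toy usage: weights "learned" on past frames, then adapted to a few new samples.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
X_new = rng.normal(size=(16, 4))                 # feature vectors from the current frame
y_new = (X_new[:, :1] > 0).astype(float)         # pseudo-labels for the new object view
W1, W2 = adapt(W1, W2, X_new, y_new)
```

The regularization weight `lam` controls the trade-off: a large value preserves the previous model almost unchanged, while a small value lets the current frame dominate.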

Keywords

Deep Learning · Unlabelled Data · Actual Probability Vector · Stable Tracking · Adaptive Methodology

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Nikolaos Doulamis (1)
  • Anastasios Doulamis (2)
  1. National Technical University of Athens, Zografou, Greece
  2. Technical University of Crete, Chania, Greece
