The Visual Object Tracking VOT2016 Challenge Results

  • Matej Kristan
  • Aleš Leonardis
  • Jiří Matas
  • Michael Felsberg
  • Roman Pflugfelder
  • Luka Čehovin
  • Tomáš Vojíř
  • Gustav Häger
  • Alan Lukežič
  • Gustavo Fernández
  • Abhinav Gupta
  • Alfredo Petrosino
  • Alireza Memarmoghadam
  • Alvaro Garcia-Martin
  • Andrés Solís Montero
  • Andrea Vedaldi
  • Andreas Robinson
  • Andy J. Ma
  • Anton Varfolomieiev
  • Aydin Alatan
  • Aykut Erdem
  • Bernard Ghanem
  • Bin Liu
  • Bohyung Han
  • Brais Martinez
  • Chang-Ming Chang
  • Changsheng Xu
  • Chong Sun
  • Daijin Kim
  • Dapeng Chen
  • Dawei Du
  • Deepak Mishra
  • Dit-Yan Yeung
  • Erhan Gundogdu
  • Erkut Erdem
  • Fahad Khan
  • Fatih Porikli
  • Fei Zhao
  • Filiz Bunyak
  • Francesco Battistone
  • Gao Zhu
  • Giorgio Roffo
  • Gorthi R. K. Sai Subrahmanyam
  • Guilherme Bastos
  • Guna Seetharaman
  • Henry Medeiros
  • Hongdong Li
  • Honggang Qi
  • Horst Bischof
  • Horst Possegger
  • Huchuan Lu
  • Hyemin Lee
  • Hyeonseob Nam
  • Hyung Jin Chang
  • Isabela Drummond
  • Jack Valmadre
  • Jae-chan Jeong
  • Jae-il Cho
  • Jae-Yeong Lee
  • Jianke Zhu
  • Jiayi Feng
  • Jin Gao
  • Jin Young Choi
  • Jingjing Xiao
  • Ji-Wan Kim
  • Jiyeoup Jeong
  • João F. Henriques
  • Jochen Lang
  • Jongwon Choi
  • Jose M. Martinez
  • Junliang Xing
  • Junyu Gao
  • Kannappan Palaniappan
  • Karel Lebeda
  • Ke Gao
  • Krystian Mikolajczyk
  • Lei Qin
  • Lijun Wang
  • Longyin Wen
  • Luca Bertinetto
  • Madan Kumar Rapuru
  • Mahdieh Poostchi
  • Mario Maresca
  • Martin Danelljan
  • Matthias Mueller
  • Mengdan Zhang
  • Michael Arens
  • Michel Valstar
  • Ming Tang
  • Mooyeol Baek
  • Muhammad Haris Khan
  • Naiyan Wang
  • Nana Fan
  • Noor Al-Shakarji
  • Ondrej Miksik
  • Osman Akin
  • Payman Moallem
  • Pedro Senna
  • Philip H. S. Torr
  • Pong C. Yuen
  • Qingming Huang
  • Rafael Martin-Nieto
  • Rengarajan Pelapur
  • Richard Bowden
  • Robert Laganière
  • Rustam Stolkin
  • Ryan Walsh
  • Sebastian B. Krah
  • Shengkun Li
  • Shengping Zhang
  • Shizeng Yao
  • Simon Hadfield
  • Simone Melzi
  • Siwei Lyu
  • Siyi Li
  • Stefan Becker
  • Stuart Golodetz
  • Sumithra Kakanuru
  • Sunglok Choi
  • Tao Hu
  • Thomas Mauthner
  • Tianzhu Zhang
  • Tony Pridmore
  • Vincenzo Santopietro
  • Weiming Hu
  • Wenbo Li
  • Wolfgang Hübner
  • Xiangyuan Lan
  • Xiaomeng Wang
  • Xin Li
  • Yang Li
  • Yiannis Demiris
  • Yifan Wang
  • Yuankai Qi
  • Zejian Yuan
  • Zexiong Cai
  • Zhan Xu
  • Zhenyu He
  • Zhizhen Chi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9914)

Abstract

The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, many of which have been published at major computer vision conferences and in journals in recent years. The number of tested state-of-the-art trackers makes VOT2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground-truth bounding-box annotation methodology and (ii) extending the evaluation system with a no-reset experiment. The dataset, the evaluation kit, and the results are publicly available at the challenge website (http://votchallenge.net).

Keywords

Performance evaluation · Short-term single-object trackers · VOT

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Matej Kristan (1)
  • Aleš Leonardis (2)
  • Jiří Matas (3)
  • Michael Felsberg (4)
  • Roman Pflugfelder (5)
  • Luka Čehovin (1)
  • Tomáš Vojíř (3)
  • Gustav Häger (4)
  • Alan Lukežič (1)
  • Gustavo Fernández (5)
  • Abhinav Gupta (10)
  • Alfredo Petrosino (30)
  • Alireza Memarmoghadam (36)
  • Alvaro Garcia-Martin (32)
  • Andrés Solís Montero (39)
  • Andrea Vedaldi (40)
  • Andreas Robinson (4)
  • Andy J. Ma (18)
  • Anton Varfolomieiev (23)
  • Aydin Alatan (26)
  • Aykut Erdem (16)
  • Bernard Ghanem (22)
  • Bin Liu (45)
  • Bohyung Han (31)
  • Brais Martinez (38)
  • Chang-Ming Chang (34)
  • Changsheng Xu (11)
  • Chong Sun (12)
  • Daijin Kim (31)
  • Dapeng Chen (43)
  • Dawei Du (35)
  • Deepak Mishra (21)
  • Dit-Yan Yeung (19)
  • Erhan Gundogdu (7)
  • Erkut Erdem (16)
  • Fahad Khan (4)
  • Fatih Porikli (6, 9, 29)
  • Fei Zhao (11)
  • Filiz Bunyak (37)
  • Francesco Battistone (30)
  • Gao Zhu (9)
  • Giorgio Roffo (42)
  • Gorthi R. K. Sai Subrahmanyam (21)
  • Guilherme Bastos (33)
  • Guna Seetharaman (27)
  • Henry Medeiros (25)
  • Hongdong Li (6, 9)
  • Honggang Qi (35)
  • Horst Bischof (15)
  • Horst Possegger (15)
  • Huchuan Lu (12)
  • Hyemin Lee (31)
  • Hyeonseob Nam (28)
  • Hyung Jin Chang (20)
  • Isabela Drummond (33)
  • Jack Valmadre (40)
  • Jae-chan Jeong (13)
  • Jae-il Cho (13)
  • Jae-Yeong Lee (13)
  • Jianke Zhu (44)
  • Jiayi Feng (11)
  • Jin Gao (11)
  • Jin Young Choi (8)
  • Jingjing Xiao (2)
  • Ji-Wan Kim (13)
  • Jiyeoup Jeong (8)
  • João F. Henriques (40)
  • Jochen Lang (39)
  • Jongwon Choi (8)
  • Jose M. Martinez (32)
  • Junliang Xing (11)
  • Junyu Gao (11)
  • Kannappan Palaniappan (37)
  • Karel Lebeda (41)
  • Ke Gao (37)
  • Krystian Mikolajczyk (20)
  • Lei Qin (11)
  • Lijun Wang (12)
  • Longyin Wen (34)
  • Luca Bertinetto (40)
  • Madan Kumar Rapuru (21)
  • Mahdieh Poostchi (37)
  • Mario Maresca (30)
  • Martin Danelljan (4)
  • Matthias Mueller (22)
  • Mengdan Zhang (11)
  • Michael Arens (14)
  • Michel Valstar (38)
  • Ming Tang (11)
  • Mooyeol Baek (31)
  • Muhammad Haris Khan (38)
  • Naiyan Wang (19)
  • Nana Fan (17)
  • Noor Al-Shakarji (37)
  • Ondrej Miksik (40)
  • Osman Akin (16)
  • Payman Moallem (36)
  • Pedro Senna (33)
  • Philip H. S. Torr (40)
  • Pong C. Yuen (18)
  • Qingming Huang (17, 35)
  • Rafael Martin-Nieto (32)
  • Rengarajan Pelapur (37)
  • Richard Bowden (41)
  • Robert Laganière (39)
  • Rustam Stolkin (2)
  • Ryan Walsh (25)
  • Sebastian B. Krah (14)
  • Shengkun Li (34)
  • Shengping Zhang (17)
  • Shizeng Yao (37)
  • Simon Hadfield (41)
  • Simone Melzi (42)
  • Siwei Lyu (34)
  • Siyi Li (19)
  • Stefan Becker (14)
  • Stuart Golodetz (40)
  • Sumithra Kakanuru (21)
  • Sunglok Choi (13)
  • Tao Hu (35)
  • Thomas Mauthner (15)
  • Tianzhu Zhang (11)
  • Tony Pridmore (38)
  • Vincenzo Santopietro (30)
  • Weiming Hu (11)
  • Wenbo Li (24)
  • Wolfgang Hübner (14)
  • Xiangyuan Lan (18)
  • Xiaomeng Wang (38)
  • Xin Li (17)
  • Yang Li (44)
  • Yiannis Demiris (20)
  • Yifan Wang (12)
  • Yuankai Qi (17)
  • Zejian Yuan (43)
  • Zexiong Cai (18)
  • Zhan Xu (44)
  • Zhenyu He (17)
  • Zhizhen Chi (12)
  1. University of Ljubljana, Ljubljana, Slovenia
  2. University of Birmingham, Birmingham, England
  3. Czech Technical University, Praha, Czech Republic
  4. Linköping University, Linköping, Sweden
  5. Austrian Institute of Technology, Seibersdorf, Austria
  6. ARC Centre of Excellence for Robotic Vision, Brisbane, Australia
  7. Aselsan Research Center, Ankara, Turkey
  8. ASRI, Seoul, South Korea
  9. Australian National University, Canberra, Australia
  10. Carnegie Mellon University, Pittsburgh, USA
  11. Chinese Academy of Sciences, Beijing, China
  12. Dalian University of Technology, Dalian, China
  13. Electronics and Telecommunications Research Institute, Seoul, South Korea
  14. Fraunhofer IOSB, Karlsruhe, Germany
  15. Graz University of Technology, Graz, Austria
  16. Hacettepe University, Çankaya, Turkey
  17. Harbin Institute of Technology, Harbin, China
  18. Hong Kong Baptist University, Kowloon Tong, China
  19. Hong Kong University of Science and Technology, Hong Kong, China
  20. Imperial College London, London, England
  21. Indian Institute of Space Science and Technology, Thiruvananthapuram, India
  22. KAUST, Thuwal, Saudi Arabia
  23. Kyiv Polytechnic Institute, Kiev, Ukraine
  24. Lehigh University, Bethlehem, USA
  25. Marquette University, Milwaukee, USA
  26. Middle East Technical University, Çankaya, Turkey
  27. Naval Research Lab, Washington, D.C., USA
  28. NAVER Corporation, Seongnam, South Korea
  29. Data61/CSIRO, Eveleigh, Australia
  30. Parthenope University of Naples, Napoli, Italy
  31. POSTECH, Pohang, South Korea
  32. Universidad Autónoma de Madrid, Madrid, Spain
  33. Universidade Federal de Itajubá, Pinheirinho, Brazil
  34. University at Albany, Albany, USA
  35. University of Chinese Academy of Sciences, Beijing, China
  36. University of Isfahan, Isfahan, Iran
  37. University of Missouri, Columbia, USA
  38. University of Nottingham, Nottingham, England
  39. University of Ottawa, Ottawa, Canada
  40. University of Oxford, Oxford, England
  41. University of Surrey, Guildford, England
  42. University of Verona, Verona, Italy
  43. Xi’an Jiaotong University, Xi’an, China
  44. Zhejiang University, Hangzhou, China
  45. Moshanghua Technology Co., Beijing, China