The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results

  • Michael Felsberg
  • Matej Kristan
  • Jiří Matas
  • Aleš Leonardis
  • Roman Pflugfelder
  • Gustav Häger
  • Amanda Berg
  • Abdelrahman Eldesokey
  • Jörgen Ahlberg
  • Luka Čehovin
  • Tomáš Vojíř
  • Alan Lukežič
  • Gustavo Fernández
  • Alfredo Petrosino
  • Alvaro Garcia-Martin
  • Andrés Solís Montero
  • Anton Varfolomieiev
  • Aykut Erdem
  • Bohyung Han
  • Chang-Ming Chang
  • Dawei Du
  • Erkut Erdem
  • Fahad Shahbaz Khan
  • Fatih Porikli
  • Fei Zhao
  • Filiz Bunyak
  • Francesco Battistone
  • Gao Zhu
  • Guna Seetharaman
  • Hongdong Li
  • Honggang Qi
  • Horst Bischof
  • Horst Possegger
  • Hyeonseob Nam
  • Jack Valmadre
  • Jianke Zhu
  • Jiayi Feng
  • Jochen Lang
  • Jose M. Martinez
  • Kannappan Palaniappan
  • Karel Lebeda
  • Ke Gao
  • Krystian Mikolajczyk
  • Longyin Wen
  • Luca Bertinetto
  • Mahdieh Poostchi
  • Mario Maresca
  • Martin Danelljan
  • Michael Arens
  • Ming Tang
  • Mooyeol Baek
  • Nana Fan
  • Noor Al-Shakarji
  • Ondrej Miksik
  • Osman Akin
  • Philip H. S. Torr
  • Qingming Huang
  • Rafael Martin-Nieto
  • Rengarajan Pelapur
  • Richard Bowden
  • Robert Laganière
  • Sebastian B. Krah
  • Shengkun Li
  • Shizeng Yao
  • Simon Hadfield
  • Siwei Lyu
  • Stefan Becker
  • Stuart Golodetz
  • Tao Hu
  • Thomas Mauthner
  • Vincenzo Santopietro
  • Wenbo Li
  • Wolfgang Hübner
  • Xin Li
  • Yang Li
  • Zhan Xu
  • Zhenyu He
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9914)

Abstract

The Thermal Infrared Visual Object Tracking challenge 2016, VOT-TIR2016, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2016 is the second benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2016 challenge is similar to the 2015 challenge; the main difference is the introduction of new, more difficult sequences into the dataset. Furthermore, the VOT-TIR2016 evaluation adopted the improvements regarding overlap calculation introduced in VOT2016. Compared to VOT-TIR2015, a significant general improvement of results has been observed, which partly compensates for the more difficult sequences. The dataset, the evaluation kit, and the results are publicly available at the challenge website.
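
Per-frame accuracy in the VOT methodology is based on the overlap between the predicted and ground-truth regions. As a minimal illustration, the sketch below computes axis-aligned bounding-box intersection-over-union in Python; it is an assumption for illustration only and does not reproduce the official VOT2016 toolkit routine, which also handles rotated rectangles and the revised overlap handling mentioned above.

    def bbox_iou(a, b):
        """Intersection-over-union of two (x, y, width, height) boxes."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        # Intersection rectangle (zero area if the boxes do not overlap)
        ix = max(ax, bx)
        iy = max(ay, by)
        iw = max(0.0, min(ax + aw, bx + bw) - ix)
        ih = max(0.0, min(ay + ah, by + bh) - iy)
        inter = iw * ih
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    # Example: ground-truth box vs. tracker prediction
    print(bbox_iou((10, 10, 50, 50), (20, 20, 50, 50)))  # ≈ 0.47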

Keywords

Performance evaluation · Object tracking · Thermal IR · VOT

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Michael Felsberg (1)
  • Matej Kristan (2)
  • Jiří Matas (3)
  • Aleš Leonardis (4)
  • Roman Pflugfelder (5)
  • Gustav Häger (1)
  • Amanda Berg (1, 6)
  • Abdelrahman Eldesokey (1)
  • Jörgen Ahlberg (1, 6)
  • Luka Čehovin (2)
  • Tomáš Vojíř (3)
  • Alan Lukežič (2)
  • Gustavo Fernández (5)
  • Alfredo Petrosino (20)
  • Alvaro Garcia-Martin (22)
  • Andrés Solís Montero (25)
  • Anton Varfolomieiev (15)
  • Aykut Erdem (12)
  • Bohyung Han (21)
  • Chang-Ming Chang (23)
  • Dawei Du (9)
  • Erkut Erdem (12)
  • Fahad Shahbaz Khan (1)
  • Fatih Porikli (7, 8, 19)
  • Fei Zhao (9)
  • Filiz Bunyak (24)
  • Francesco Battistone (20)
  • Gao Zhu (8)
  • Guna Seetharaman (17)
  • Hongdong Li (7, 8)
  • Honggang Qi (9)
  • Horst Bischof (11)
  • Horst Possegger (11)
  • Hyeonseob Nam (18)
  • Jack Valmadre (26)
  • Jianke Zhu (28)
  • Jiayi Feng (9)
  • Jochen Lang (25)
  • Jose M. Martinez (22)
  • Kannappan Palaniappan (24)
  • Karel Lebeda (27)
  • Ke Gao (24)
  • Krystian Mikolajczyk (14)
  • Longyin Wen (23)
  • Luca Bertinetto (26)
  • Mahdieh Poostchi (24)
  • Mario Maresca (20)
  • Martin Danelljan (1)
  • Michael Arens (10)
  • Ming Tang (9)
  • Mooyeol Baek (21)
  • Nana Fan (13)
  • Noor Al-Shakarji (24)
  • Ondrej Miksik (26)
  • Osman Akin (12)
  • Philip H. S. Torr (26)
  • Qingming Huang (9)
  • Rafael Martin-Nieto (22)
  • Rengarajan Pelapur (24)
  • Richard Bowden (27)
  • Robert Laganière (25)
  • Sebastian B. Krah (10)
  • Shengkun Li (23)
  • Shizeng Yao (24)
  • Simon Hadfield (27)
  • Siwei Lyu (23)
  • Stefan Becker (10)
  • Stuart Golodetz (26)
  • Tao Hu (9)
  • Thomas Mauthner (11)
  • Vincenzo Santopietro (20)
  • Wenbo Li (16)
  • Wolfgang Hübner (10)
  • Xin Li (13)
  • Yang Li (28)
  • Zhan Xu (28)
  • Zhenyu He (13)
  1. Linköping University, Linköping, Sweden
  2. University of Ljubljana, Ljubljana, Slovenia
  3. Czech Technical University, Prague, Czech Republic
  4. University of Birmingham, Birmingham, England
  5. Austrian Institute of Technology, Seibersdorf, Austria
  6. Termisk Systemteknik AB, Linköping, Sweden
  7. ARC Centre of Excellence for Robotic Vision, Canberra, Australia
  8. Australian National University, Canberra, Australia
  9. Chinese Academy of Sciences, Beijing, China
  10. Fraunhofer IOSB, Karlsruhe, Germany
  11. Graz University of Technology, Graz, Austria
  12. Hacettepe University, Ankara, Turkey
  13. Harbin Institute of Technology, Harbin, China
  14. Imperial College London, London, UK
  15. Kyiv Polytechnic Institute, Kiev, Ukraine
  16. Lehigh University, Bethlehem, USA
  17. Naval Research Lab, Washington, D.C., USA
  18. NAVER Corp., Seongnam, South Korea
  19. Data61/CSIRO, Alexandria, Australia
  20. Parthenope University of Naples, Naples, Italy
  21. POSTECH, Pohang, South Korea
  22. Universidad Autónoma de Madrid, Madrid, Spain
  23. University at Albany, Albany, USA
  24. University of Missouri, Columbia, USA
  25. University of Ottawa, Ottawa, Canada
  26. University of Oxford, Oxford, England
  27. University of Surrey, Guildford, England
  28. Zhejiang University, Hangzhou, China
