GCPR 2017: Pattern Recognition, pp. 415–426

Measuring the Accuracy of Object Detectors and Trackers

  • Tobias Böttger
  • Patrick Follmann
  • Michael Fauser
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10496)

Abstract

The accuracy of object detectors and trackers is most commonly evaluated by the Intersection over Union (IoU) criterion. To date, most approaches are restricted to axis-aligned or oriented boxes and, as a consequence, many datasets are only labeled with boxes. Nevertheless, axis-aligned or oriented boxes cannot accurately capture an object’s shape. To address this, a number of densely segmented datasets have started to emerge in both the object detection and the object tracking communities. However, evaluating the accuracy of object detectors and trackers that are restricted to boxes on densely segmented data is not straightforward. To close this gap, we introduce the relative Intersection over Union (rIoU) accuracy measure. The measure normalizes the IoU with the optimal box for the segmentation, yielding an accuracy measure that ranges between 0 and 1 and allows a more precise measurement of accuracy. Furthermore, it enables an efficient and easy way to understand scenes and the strengths and weaknesses of an object detection or tracking approach. We show how the new measure can be efficiently calculated and present an easy-to-use evaluation framework. The framework is tested on the DAVIS and the VOT2016 segmentations and has been made available to the community.
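The idea behind rIoU can be sketched in a few lines: compute the IoU of the predicted box against the dense segmentation, then divide by the IoU that the best possible box achieves on that segmentation. The sketch below is illustrative only; the paper computes the optimal box efficiently, whereas this version uses a brute-force search over the segmentation's bounding box (all function names here are hypothetical, not from the authors' framework).

```python
import numpy as np

def box_mask(shape, box):
    """Rasterize an axis-aligned box (r0, c0, r1, c1), inclusive, as a binary mask."""
    r0, c0, r1, c1 = box
    m = np.zeros(shape, dtype=bool)
    m[r0:r1 + 1, c0:c1 + 1] = True
    return m

def iou(mask_a, mask_b):
    """Intersection over Union of two binary masks."""
    union = np.logical_or(mask_a, mask_b).sum()
    return np.logical_and(mask_a, mask_b).sum() / union if union else 0.0

def best_box_iou(seg):
    """Brute-force search for the box with maximal IoU against the segmentation.
    Restricting the search to the segmentation's bounding box is safe, since
    growing a box beyond it only adds area outside the segmentation."""
    rows, cols = np.nonzero(seg)
    best = 0.0
    for r0 in range(rows.min(), rows.max() + 1):
        for r1 in range(r0, rows.max() + 1):
            for c0 in range(cols.min(), cols.max() + 1):
                for c1 in range(c0, cols.max() + 1):
                    best = max(best, iou(box_mask(seg.shape, (r0, c0, r1, c1)), seg))
    return best

def relative_iou(seg, detection_box):
    """rIoU: IoU of the detection, normalized by the IoU of the optimal box."""
    return iou(box_mask(seg.shape, detection_box), seg) / best_box_iou(seg)
```

For a rectangular segmentation the optimal box reaches IoU 1, so a perfectly tight detection scores rIoU 1.0; for a non-rectangular object, plain IoU is capped below 1 even for the best box, which is exactly the bias rIoU removes.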


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Tobias Böttger (1)
  • Patrick Follmann (1)
  • Michael Fauser (1)

  1. MVTec Software GmbH, Munich, Germany
