
Visual Coin-Tracking: Tracking of Planar Double-Sided Objects

  • Jonáš Šerých
  • Jiří Matas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11824)

Abstract

We introduce a new video analysis problem – tracking of rigid planar objects in sequences where both of their sides are visible. Such coin-like objects often rotate quickly about an arbitrary axis, producing unique challenges such as rapid changes of incident light and aspect ratio, as well as rotational motion blur. Despite being common, neither tracking sequences containing coin-like objects nor a suitable tracking algorithm has been published.
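For intuition, which of the two sides of a planar object faces the camera can be decided from the sign of the dot product between the object plane normal and the viewing direction. The following minimal Python sketch (not part of the paper; all names are illustrative) shows this test for a coin-like object.

```python
import numpy as np

def visible_side(plane_normal, camera_center, plane_point):
    """Return which side of a planar object faces the camera.

    plane_normal  : (3,) unit normal of the object plane in world coordinates
    camera_center : (3,) camera position in world coordinates
    plane_point   : (3,) any point on the object plane

    The front side is visible when the normal points towards the camera,
    i.e. when its dot product with the viewing direction is positive.
    """
    view_dir = camera_center - plane_point          # direction from plane towards camera
    return "front" if np.dot(plane_normal, view_dir) > 0 else "back"

# Example: a coin lying in the z = 0 plane with the camera above it -> front side visible.
print(visible_side(np.array([0.0, 0.0, 1.0]),
                   np.array([0.0, 0.0, 5.0]),
                   np.array([0.0, 0.0, 0.0])))      # -> "front"
```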

As a second contribution, we present a novel coin-tracking benchmark containing 17 video sequences annotated with object segmentation masks. Experiments show that the sequences differ significantly from the ones encountered in standard tracking datasets. We propose a baseline coin-tracking method based on convolutional neural network segmentation and explicit pose modeling. Its performance confirms that coin-tracking is an open and challenging problem.
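The abstract does not spell out the baseline implementation; as an illustration of the segmentation-plus-pose idea, the following hedged Python/OpenCV sketch fits an ellipse to a per-frame segmentation mask and flags near edge-on views, where a flip between the two sides may occur. The `segment_frame` callable is a hypothetical placeholder for a CNN segmenter, and the whole loop is a sketch, not the authors' method.

```python
import cv2
import numpy as np

def ellipse_pose(mask):
    """Fit an ellipse to a binary segmentation mask of a circular object.

    For a physically circular object, the minor-to-major axis ratio of the
    fitted ellipse approximates the cosine of the out-of-plane tilt.
    Returns None if no usable contour is found.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8),          # OpenCV 4.x signature assumed
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    if len(contour) < 5:                                           # fitEllipse needs >= 5 points
        return None
    (cx, cy), axes, angle = cv2.fitEllipse(contour)
    minor, major = sorted(axes)
    return {"center": (cx, cy), "tilt_cos": minor / max(major, 1e-6), "angle": angle}

def track(frames, segment_frame):
    """Toy tracking loop: segment each frame, fit an ellipse, and flag frames
    where the object is seen nearly edge-on (possible front/back flip)."""
    poses = []
    for frame in frames:
        mask = segment_frame(frame)        # hypothetical placeholder for the CNN segmenter
        pose = ellipse_pose(mask)
        if pose is not None:
            pose["edge_on"] = pose["tilt_cos"] < 0.1
        poses.append(pose)
    return poses
```

The edge-on test matters because a coin-like object passing through an edge-on view is exactly where the visible side can change, so an explicit pose model has to reason about such frames rather than rely on appearance alone.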

Acknowledgements

This work was supported by Toyota Motor Europe HS, by CTU student grant SGS17/185/OHK3/3T/13 and Technology Agency of the Czech Republic project TH0301019.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. CMP Visual Recognition Group, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
