Surface tracking assessment and interaction in texture space

Open Access
Research Article

Abstract

In this paper, we present a novel approach for assessing and interacting with surface tracking algorithms targeting video manipulation in post-production. As tracking inaccuracies are unavoidable, we enable the user to provide small hints to the algorithm instead of correcting erroneous results afterwards. Based on 2D mesh warp-based optical flow estimation, we visualize results and provide tools for user feedback in a consistent reference system: texture space. In this space, accurate tracking is reflected by a static appearance, and errors can easily be spotted as apparent change. A variety of established tools can be used to visualize and assess the change between frames. User interaction to improve tracking results also becomes more intuitive in texture space, as it can focus on a small, fixed region rather than a moving object. We show how such tools can be implemented for interaction in texture space to provide a more intuitive interface that allows more effective and accurate user feedback.
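To make the texture-space assessment concrete, the short Python sketch below inverse-warps a frame into the fixed texture domain using the tracked mesh and diffs it against a reference texture. It is an illustration only, not the authors' implementation: the per-triangle affine warp and all names (warp_to_texture_space, tracked_vertices, texture_uv, error_map) are assumptions standing in for the paper's mesh-warp model.

    # Minimal sketch of texture-space assessment, assuming a triangulated 2D mesh
    # whose vertices are tracked per frame. All names are illustrative, not from
    # the paper's code; the per-triangle affine warp approximates the mesh warp.
    import cv2
    import numpy as np

    def warp_to_texture_space(frame, tracked_vertices, texture_uv, triangles, tex_size):
        """Inverse-warp a video frame into the fixed texture domain.

        frame            -- H x W x 3 image of the current frame
        tracked_vertices -- N x 2 mesh vertex positions in this frame (pixels)
        texture_uv       -- N x 2 fixed vertex positions in texture space (pixels)
        triangles        -- M x 3 vertex indices of the mesh triangulation
        tex_size         -- (height, width) of the texture-space image
        """
        tex = np.zeros((tex_size[0], tex_size[1], 3), dtype=frame.dtype)
        for tri in triangles:
            src = tracked_vertices[tri].astype(np.float32)  # triangle in the frame
            dst = texture_uv[tri].astype(np.float32)        # triangle in texture space
            # Affine map from frame coordinates to texture coordinates.
            A = cv2.getAffineTransform(src, dst)
            warped = cv2.warpAffine(frame, A, (tex_size[1], tex_size[0]))
            # Restrict the result to this triangle's footprint in texture space.
            mask = np.zeros(tex_size, dtype=np.uint8)
            cv2.fillConvexPoly(mask, np.int32(dst), 1)
            tex[mask.astype(bool)] = warped[mask.astype(bool)]
        return tex

    def error_map(tex_frame, tex_reference):
        # Accurate tracking yields a (nearly) static texture-space sequence, so
        # the per-pixel difference against a reference texture exposes errors.
        return cv2.absdiff(tex_frame, tex_reference).max(axis=2)

Under accurate tracking, successive texture-space images are nearly identical and the error map stays close to zero; regions of apparent change mark exactly the spots where a user hint is needed.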

Keywords

surface tracking, assessment, interaction, mesh warp, optical flow

Supplementary material

41095_2017_89_MOESM1_ESM.flv (25.9 MB)
41095_2017_89_MOESM2_ESM.flv (26.6 MB)
41095_2017_89_MOESM3_ESM.flv (41.8 MB)


Copyright information

© The Author(s) 2017

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Fraunhofer HHI, Berlin, Germany
  2. Humboldt University, Berlin, Germany
