
MeshFlow: Minimum Latency Online Video Stabilization

  • Shuaicheng Liu
  • Ping Tan
  • Lu Yuan
  • Jian Sun
  • Bing Zeng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9910)

Abstract

Many existing video stabilization methods operate off-line, i.e., as a post-processing tool for pre-recorded videos. Some methods can stabilize videos online, but they either require additional hardware sensors (e.g., a gyroscope) or adopt a single parametric motion model (e.g., affine, homography), which is inadequate for representing spatially-variant motions. In this paper, we propose a technique for online video stabilization with only one-frame latency using a novel MeshFlow motion model. The MeshFlow is a spatially smooth, sparse motion field with motion vectors defined only at the mesh vertices. In particular, the motion vectors at matched feature points are transferred to their nearby mesh vertices. The MeshFlow is produced by assigning each vertex a unique motion vector via two median filters. Path smoothing is conducted on the vertex profiles, which are the motion vectors collected at the same vertex location in the MeshFlow over time. The profiles are smoothed adaptively by a novel smoothing technique, the Predicted Adaptive Path Smoothing (PAPS), which uses only motions from the past. In this way, the proposed method not only handles spatially-variant motions but also works online in real time, offering potential for a variety of intelligent applications (e.g., security systems, robotics, UAVs). Quantitative and qualitative evaluations show that our method produces results comparable to those of state-of-the-art off-line methods.
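
The construction of the MeshFlow field can be illustrated with a short sketch. The Python fragment below is an illustrative sketch of the idea described in the abstract, not the authors' implementation: the grid size, the neighbourhood radius, the 3x3 spatial filter, and the function name meshflow are assumptions made for illustration. It transfers feature motion vectors to nearby mesh vertices and assigns each vertex a unique motion via two median filters.

# Sketch of MeshFlow construction (assumptions: 16x16 mesh, radius as a
# fraction of the frame size, 3x3 spatial median filter).
import numpy as np

def meshflow(features, motions, frame_w, frame_h, grid=16, radius_frac=0.1):
    """features: (N, 2) feature locations; motions: (N, 2) motion vectors."""
    vx = np.linspace(0, frame_w, grid + 1)  # vertex x coordinates
    vy = np.linspace(0, frame_h, grid + 1)  # vertex y coordinates
    radius = radius_frac * max(frame_w, frame_h)

    # First median filter: each vertex takes the median of the motions of
    # all matched features that lie within `radius` of it.
    field = np.zeros((grid + 1, grid + 1, 2))
    for i, y in enumerate(vy):
        for j, x in enumerate(vx):
            d = np.hypot(features[:, 0] - x, features[:, 1] - y)
            near = motions[d < radius]
            if len(near):
                field[i, j] = np.median(near, axis=0)

    # Second median filter: spatial median over a 3x3 vertex neighbourhood,
    # removing remaining outliers and keeping the field spatially smooth.
    smoothed = field.copy()
    for i in range(grid + 1):
        for j in range(grid + 1):
            i0, i1 = max(0, i - 1), min(grid, i + 1) + 1
            j0, j1 = max(0, j - 1), min(grid, j + 1) + 1
            patch = field[i0:i1, j0:j1].reshape(-1, 2)
            smoothed[i, j] = np.median(patch, axis=0)
    return smoothed

In the full method, the per-vertex motions produced this way are accumulated over time into vertex profiles, which PAPS then smooths using only past frames, so that each stabilized frame can be emitted with one-frame latency.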

Keywords

Online video stabilization · MeshFlow · Vertex profile

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grants 61502079 and 61370148). Ping Tan is supported by the NSERC Discovery Grant 31-611664 and the NSERC Discovery Accelerator Supplement 31-611663.

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Shuaicheng Liu (1)
  • Ping Tan (2)
  • Lu Yuan (3)
  • Jian Sun (3)
  • Bing Zeng (1)

  1. University of Electronic Science and Technology of China, Chengdu, China
  2. Simon Fraser University, Burnaby, Canada
  3. Microsoft Research Asia, Beijing, China