Visual Odometry Based Omni-directional Hyperlapse

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 841)

Abstract

The prohibitive amount of time required to review the footage captured by surveillance and other cameras has called into question the very utility of large-scale video logging. Yet such logging and analysis are indispensable to security applications. The only way out of this paradox is expedited browsing through the creation of a hyperlapse. We address the hyperlapse problem for the very challenging case of intensive egomotion, which makes a naive hyperlapse highly jerky. We propose an economical approach to trajectory estimation based on Visual Odometry and implement cost functions that penalize pose and path deviations. The method operates on data captured by an omni-directional camera, so the viewer can choose to observe any direction while browsing. This requires several innovations, including handling the camera's severe radial distortion and performing scene stabilization on the least distorted region of the omni view.
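The frame-selection idea underlying such a hyperlapse can be illustrated as a shortest-path problem over the VO trajectory: each candidate jump between frames pays a cost for deviating from the target speed-up and a cost for how far the camera pose moved. The sketch below is a minimal illustration of that scheme, not the authors' implementation; the specific cost terms, the `window` heuristic, and the use of plain Euclidean distance between VO positions as the pose-deviation penalty are all assumptions.

```python
import numpy as np

def select_hyperlapse_frames(positions, speedup, lambda_pose=1.0):
    """Pick a subsequence of frames whose spacing approximates the target
    speed-up while penalizing large camera motion between picked frames.

    positions : (N, 3) array of per-frame camera positions from visual odometry.
    speedup   : desired acceleration factor (e.g. 8 keeps roughly 1 in 8 frames).
    Returns the selected frame indices (hypothetical cost design).
    """
    n = len(positions)
    window = 2 * speedup                  # farthest jump considered from a frame
    INF = float("inf")
    cost = np.full(n, INF)                # best accumulated cost to reach frame i
    prev = np.full(n, -1, dtype=int)      # backpointer for path recovery
    cost[0] = 0.0
    for i in range(n):
        if cost[i] == INF:
            continue
        for j in range(i + 1, min(i + window + 1, n)):
            # Speed cost: squared deviation of the jump from the target speed-up.
            c_speed = (j - i - speedup) ** 2
            # Pose cost: distance the camera travelled between the two picks,
            # a stand-in for the paper's pose/path deviation penalties.
            c_pose = lambda_pose * float(np.linalg.norm(positions[j] - positions[i]))
            if cost[i] + c_speed + c_pose < cost[j]:
                cost[j] = cost[i] + c_speed + c_pose
                prev[j] = i
    # Backtrack from the last frame to recover the chosen subsequence.
    path, k = [], n - 1
    while k != -1:
        path.append(k)
        k = prev[k]
    return path[::-1]
```

On a smooth trajectory this selects roughly every `speedup`-th frame; when the VO positions jump (intensive egomotion), the pose term steers the selection toward frames with smaller camera displacement, which is what keeps the accelerated video watchable.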

Keywords

Visual Odometry (VO) · Omni-directional camera · Omni view · Ricoh Theta camera · Path
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Notes

Acknowledgment

This project is part of postgraduate dissertation research at the Indian Institute of Technology Kanpur, India, financially supported by the Defence Research and Development Organisation, an Indian government organization.

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur, India
  2. Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, Kanpur, India