Journal of Mathematical Imaging and Vision, Volume 41, Issue 3, pp 182–193

A Variational Framework for Structure from Motion in Omnidirectional Image Sequences

  • Luigi Bagnato
  • Pascal Frossard
  • Pierre Vandergheynst

Abstract

We address the problem of depth and ego-motion estimation from omnidirectional images. We formulate a correspondence-free structure-from-motion problem for sequences of images mapped onto the 2-sphere. A novel graph-based variational framework is first proposed for depth estimation between pairs of images. The estimation is cast as a TV-L1 optimization problem that is solved by a fast graph-based algorithm. The ego-motion is then estimated directly from the depth information, without explicit computation of the optical flow. Finally, both problems are addressed together in an iterative algorithm that alternates between depth and ego-motion estimation for fast computation of 3D information from motion in image sequences. Experimental results demonstrate the effectiveness of the proposed algorithm for 3D reconstruction from synthetic and natural omnidirectional images.
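
As a rough sketch of the optimization described above (the notation $z$, $w_{ij}$, $\rho_i$, and $\lambda$ is ours, not taken from the paper), a graph-based TV-L1 depth energy takes the generic form

$$
\min_{z}\; \sum_{(i,j)\in\mathcal{E}} w_{ij}\,\lvert z_i - z_j \rvert \;+\; \lambda \sum_{i\in\mathcal{V}} \lvert \rho_i(z_i) \rvert ,
$$

where the vertices $\mathcal{V}$ are the pixel locations on the sphere, $\mathcal{E}$ and $w_{ij}$ are the edges and weights of the graph, $z_i$ is the depth at vertex $i$, $\rho_i$ is a photometric residual between the two frames, and $\lambda$ balances the total-variation regularizer against the L1 data term. The alternating scheme then repeats two steps until the estimates stabilize: minimize an energy of this type for depth with the ego-motion held fixed, then re-estimate the ego-motion directly from the current depth.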

Keywords

Structure from motion · Ego-motion · Depth estimation · Omnidirectional · Variational

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Luigi Bagnato (1)
  • Pascal Frossard (2)
  • Pierre Vandergheynst (3)

  1. Signal Processing Laboratory (LTS2 and LTS4), Institute of Electrical Engineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
  2. Signal Processing Laboratory (LTS4), Institute of Electrical Engineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
  3. Signal Processing Laboratory (LTS2), Institute of Electrical Engineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
