
VR models from epipolar images: An approach to minimize errors in synthesized images

  • Poster Session III
  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1352)

Abstract

A new paradigm, the minimization of errors in synthesized images, is introduced to combine Computer Vision and Computer Graphics for Virtual Reality applications. Based on this paradigm, a powerful algorithm, called the strip DP algorithm, is proposed for epipolar image analysis. The algorithm reconstructs VR models from epipolar images such that the error in the images synthesized from the extracted model is minimized while geometric consistency is maintained. Dynamic programming, adopted as the optimization engine, achieves complete optimization robustly and at reasonable computational cost. The strip DP algorithm handles occlusion through a multi-pass strategy: in each pass it extracts the connected feature lines that are not occluded by as-yet-undetermined feature lines. Experiments demonstrate its feasibility.
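Only the abstract is available here, so the strip DP algorithm itself is not reproduced. As a rough, illustrative sketch of the core ingredient the abstract refers to, the Python snippet below uses dynamic programming to extract a single minimum-cost feature line (a path of matching pixels across frames) from an epipolar-plane image. The function name, the per-pixel cost array, and the slope bound `max_shift` are assumptions made for illustration and do not reflect the authors' notation or their multi-pass occlusion handling.

```python
import numpy as np

def extract_feature_line(epi_cost, max_shift=2):
    """Extract one minimum-cost feature line from an epipolar-plane image.

    epi_cost  : 2-D array (n_frames x n_cols) of per-pixel matching costs
                (e.g. negative edge strength); lower is better.
    max_shift : maximum horizontal displacement of the line between
                consecutive frames (bounds the admissible slope).
    Returns the column index of the extracted line in each frame.
    """
    n_frames, n_cols = epi_cost.shape
    acc = np.full((n_frames, n_cols), np.inf)       # accumulated cost table
    back = np.zeros((n_frames, n_cols), dtype=int)  # back-pointers
    acc[0] = epi_cost[0]

    # Forward pass: each pixel inherits the cheapest predecessor
    # within the allowed slope range.
    for t in range(1, n_frames):
        for x in range(n_cols):
            lo, hi = max(0, x - max_shift), min(n_cols, x + max_shift + 1)
            prev = lo + int(np.argmin(acc[t - 1, lo:hi]))
            acc[t, x] = acc[t - 1, prev] + epi_cost[t, x]
            back[t, x] = prev

    # Backward pass: trace the optimal path from the cheapest terminal column.
    path = np.empty(n_frames, dtype=int)
    path[-1] = int(np.argmin(acc[-1]))
    for t in range(n_frames - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

Bounding the per-frame shift keeps the search space, and hence the DP cost, linear in the number of pixels per frame, which is what makes this style of complete optimization affordable in epipolar image analysis.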





Editor information

Roland Chin, Ting-Chuen Pong


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Shinya, M., Saito, T., Mori, T., Osumi, N. (1997). VR models from epipolar images: An approach to minimize errors in synthesized images. In: Chin, R., Pong, TC. (eds) Computer Vision — ACCV'98. ACCV 1998. Lecture Notes in Computer Science, vol 1352. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63931-4_251


  • DOI: https://doi.org/10.1007/3-540-63931-4_251

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63931-2

  • Online ISBN: 978-3-540-69670-4

  • eBook Packages: Springer Book Archive
