VR models from epipolar images: An approach to minimize errors in synthesized images

  • Mikio Shinya
  • Takafumi Saito
  • Takeaki Mori
  • Noriyoshi Osumi
Poster Session III
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1352)

Abstract

A new paradigm, the minimization of errors in synthesized images, is introduced to organically combine Computer Vision and Computer Graphics for Virtual Reality applications. Based on this paradigm, a powerful algorithm, called the strip DP algorithm, is proposed for epipolar image analysis. The algorithm reconstructs VR models from epipolar images so that the error in the images synthesized from the extracted model is minimized while geometrical consistency is maintained. The dynamic programming technique, adopted as the optimization engine, robustly yields complete optimization at reasonable computational cost. The strip DP algorithm is a multi-pass solution to occlusion problems: in each pass, it extracts connecting feature lines that are not occluded by undetermined feature lines. Experiments demonstrate its feasibility.
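
The abstract's core idea, matching features along epipolar lines by dynamic programming so that the reconstruction minimizes image error under a consistency constraint, can be illustrated with a small sketch. The code below is only a minimal, generic DP scanline matcher, not the paper's strip DP algorithm: the function name dp_epipolar_match, the occlusion_cost penalty, the squared-intensity error measure, and the monotonic ordering constraint standing in for geometric consistency are all assumptions introduced for illustration.

```python
import numpy as np

def dp_epipolar_match(left_line, right_line, occlusion_cost=0.1):
    """Match two corresponding epipolar scanlines with dynamic programming.

    Minimizes the summed squared intensity error of matched pixels plus a
    fixed penalty for pixels left unmatched (treated as occluded); the
    monotonic DP structure enforces an ordering constraint along the line.
    Returns the minimal cost and the list of matched index pairs.
    """
    n, m = len(left_line), len(right_line)
    INF = float("inf")
    cost = np.full((n + 1, m + 1), INF)
    back = np.zeros((n + 1, m + 1), dtype=np.int8)  # 0=match, 1=skip left, 2=skip right
    cost[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            best, move = INF, 0
            if i > 0 and j > 0:
                c = cost[i - 1, j - 1] + (left_line[i - 1] - right_line[j - 1]) ** 2
                if c < best:
                    best, move = c, 0
            if i > 0 and cost[i - 1, j] + occlusion_cost < best:
                best, move = cost[i - 1, j] + occlusion_cost, 1
            if j > 0 and cost[i, j - 1] + occlusion_cost < best:
                best, move = cost[i, j - 1] + occlusion_cost, 2
            cost[i, j], back[i, j] = best, move

    # Backtrack to recover the matched pixel pairs.
    matches, i, j = [], n, m
    while i > 0 or j > 0:
        move = back[i, j]
        if move == 0:
            matches.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif move == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], matches[::-1]

if __name__ == "__main__":
    a = np.array([0.1, 0.8, 0.8, 0.2, 0.1])
    b = np.array([0.1, 0.8, 0.2, 0.1, 0.1])
    total_error, pairs = dp_epipolar_match(a, b)
    print(total_error, pairs)
```

A multi-pass scheme in the spirit of the paper would repeat such an optimization, keeping feature lines recovered in earlier passes fixed and resolving in later passes the pixels previously left as occluded.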

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Mikio Shinya
  • Takafumi Saito
  • Takeaki Mori
  • Noriyoshi Osumi

  1. NTT Human Interface Laboratories, Japan
