
Omnidirectional image-based modeling: three approaches to approximated plenoptic representations

  • Original paper
  • Published in: Machine Vision and Applications

Abstract

In this paper we present a set of novel methods for image-based modeling using omnidirectional vision sensors. The basic idea is to acquire plenoptic representations directly and efficiently with omnidirectional vision sensors. The three methods, in order of increasing complexity, are direct memorization, discrete interpolation, and smooth interpolation. Results of these methods are compared visually with ground-truth images taken by a standard camera moving along the same path. The experimental results demonstrate that our methods are successful at generating high-quality virtual images. In particular, the smooth interpolation technique approximates the plenoptic function most closely. A comparative analysis of the computational costs associated with the three methods is also presented.
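The abstract does not detail the three methods, but the contrast between "direct memorization" and "smooth interpolation" of stored plenoptic samples can be illustrated generically. The sketch below is an assumption for illustration only, not the paper's algorithm: stored omnidirectional samples along a 1-D path are either returned verbatim (nearest sample) or linearly blended between the two bracketing capture positions.

```python
import numpy as np

# Hypothetical illustration (not the authors' method): images captured at
# known positions along a path serve as plenoptic samples. For clarity the
# "images" here are tiny arrays; real omnidirectional images would be 2-D.

def direct_memorization(samples, positions, query):
    """Return the stored sample captured nearest to the query position."""
    idx = int(np.argmin(np.abs(positions - query)))
    return samples[idx]

def smooth_interpolation(samples, positions, query):
    """Linearly blend the two stored samples that bracket the query position."""
    i = int(np.searchsorted(positions, query))
    i = max(1, min(i, len(positions) - 1))     # clamp to a valid bracket
    lo, hi = positions[i - 1], positions[i]
    w = (query - lo) / (hi - lo)               # blend weight in [0, 1]
    return (1 - w) * samples[i - 1] + w * samples[i]

# Toy path: three samples at positions 0, 1, 2 with one-pixel "images".
positions = np.array([0.0, 1.0, 2.0])
samples = np.array([[0.0], [10.0], [20.0]])
```

For a query at 0.5, direct memorization snaps to whichever captured sample is nearest, while smooth interpolation blends the two neighbours, which is the behaviour the abstract credits with approximating the plenoptic function most closely.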


Author information

Correspondence to: H. Ishiguro (e-mail: ishiguro@sys.wakayama-u.ac.jp)

Cite this article

Ishiguro, H., Ng, K., Capella, R. et al. Omnidirectional image-based modeling: three approaches to approximated plenoptic representations. Machine Vision and Applications 14, 94–102 (2003). https://doi.org/10.1007/s00138-002-0103-0
