Abstract.
In this paper we present a set of novel methods for image-based modeling using omnidirectional vision sensors. The basic idea is to directly and efficiently acquire plenoptic representations with omnidirectional vision sensors. The three methods, in order of increasing complexity, are direct memorization, discrete interpolation, and smooth interpolation. Results of these methods are compared visually with ground-truth images taken by a standard camera moved along the same path. The experimental results demonstrate that our methods are successful at generating high-quality virtual images. In particular, the smooth interpolation technique approximates the plenoptic function most closely. A comparative analysis of the computational costs associated with the three methods is also presented.
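To make the distinction between the simplest and most complex methods concrete, the following is a minimal sketch in NumPy. It assumes omnidirectional views are stored as arrays indexed by capture position; the function names and the plain linear blend are illustrative stand-ins, not the paper's actual algorithms.

```python
import numpy as np

def direct_memorization(views, positions, query):
    """Return the memorized omnidirectional view captured nearest to `query`.
    (Illustrative stand-in for the paper's direct-memorization method.)"""
    dists = np.linalg.norm(positions - query, axis=1)
    return views[int(np.argmin(dists))]

def smooth_interpolation(view_a, view_b, t):
    """Blend two neighboring omnidirectional views with weight t in [0, 1].
    (Illustrative stand-in for the paper's smooth-interpolation method.)"""
    return (1.0 - t) * view_a + t * view_b

# Two toy 4x8 grayscale panoramas captured at positions x=0 and x=1.
a = np.zeros((4, 8))
b = np.ones((4, 8))
views = np.stack([a, b])
positions = np.array([[0.0], [1.0]])

# A query at x=0.2 is closest to the first capture position.
nearest = direct_memorization(views, positions, np.array([0.2]))

# A virtual view halfway between the two captures.
mid = smooth_interpolation(a, b, 0.5)
```

Direct memorization simply replays the nearest stored view, while interpolation synthesizes an intermediate view; the paper's smooth variant refines this further to track the plenoptic function more closely.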
Correspondence to: H. Ishiguro
(e-mail: ishiguro@sys.wakayama-u.ac.jp)
Cite this article
Ishiguro, H., Ng, K., Capella, R. et al. Omnidirectional image-based modeling: three approaches to approximated plenoptic representations. Machine Vision and Applications 14, 94–102 (2003). https://doi.org/10.1007/s00138-002-0103-0