
Generation of Omnidirectional Image Without Photographer

  • Conference paper
  • First Online:
Frontiers of Computer Vision (IW-FCV 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1578)


Abstract

To create a virtual reality (VR) space from omnidirectional images, it is desirable to use images in which the photographer does not appear. In this study, we propose a method for generating such a photographer-free omnidirectional image from multiple images taken with an omnidirectional camera. In the proposed method, the photographer moves around the omnidirectional camera while capturing several images. We then perform feature point matching between the omnidirectional images and unify the appearance of all the images by applying the translation computed from the matches. Finally, the aligned images are combined with graph cut and Poisson image editing to produce an omnidirectional panoramic image that does not contain the photographer.
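The alignment step described above lends itself to a short illustration. The sketch below is a minimal, hypothetical rendering of that step in Python with OpenCV: in an equirectangular image, a rotation of the camera about its vertical axis appears as a cyclic horizontal shift of pixel columns, so the translation recovered from feature matches can be applied with a simple column roll. The file names, the choice of ORB features, the median-based shift estimate, and the use of OpenCV's seamlessClone as a stand-in for the paper's graph cut plus Poisson compositing are all assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of aligning equirectangular omnidirectional images by a
# horizontal shift estimated from feature matches, then compositing with
# Poisson blending. Illustrative only; not the authors' exact pipeline.
import cv2
import numpy as np

def estimate_horizontal_shift(ref_bgr, tgt_bgr, n_features=2000):
    """Estimate the horizontal shift (pixels) that aligns tgt_bgr with ref_bgr."""
    orb = cv2.ORB_create(nfeatures=n_features)
    ref_gray = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY)
    tgt_gray = cv2.cvtColor(tgt_bgr, cv2.COLOR_BGR2GRAY)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_tgt, des_tgt = orb.detectAndCompute(tgt_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_tgt)

    width = ref_bgr.shape[1]
    shifts = []
    for m in matches:
        # Horizontal displacement per match, wrapped into [-width/2, width/2)
        # because the equirectangular image is periodic in the horizontal direction.
        dx = kp_ref[m.queryIdx].pt[0] - kp_tgt[m.trainIdx].pt[0]
        dx = (dx + width / 2) % width - width / 2
        shifts.append(dx)
    # The median is a simple robust estimate standing in for proper outlier
    # rejection (e.g., RANSAC) over the matched feature points.
    return int(round(np.median(shifts)))

# Align every image to the first one by cyclically shifting pixel columns.
images = [cv2.imread(f"omni_{i}.jpg") for i in range(4)]   # hypothetical file names
reference = images[0]
aligned = [reference]
for img in images[1:]:
    dx = estimate_horizontal_shift(reference, img)
    aligned.append(np.roll(img, dx, axis=1))

# Compositing stand-in: paste the photographer region of the reference from
# another aligned image (assumed photographer-free there) with OpenCV's
# Poisson blending (seamlessClone). The binary mask of the photographer is
# assumed to be given, e.g., drawn by hand or from a person detector.
mask = cv2.imread("photographer_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical mask
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
ys, xs = np.nonzero(mask)
# Center of the mask's bounding box, so the clone lands exactly on the
# corresponding region of the (already aligned) reference image.
center = (int((xs.min() + xs.max()) // 2), int((ys.min() + ys.max()) // 2))
result = cv2.seamlessClone(aligned[1], reference, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("omnidirectional_without_photographer.jpg", result)
```

In this sketch the graph-cut seam optimization from the paper is omitted; in practice it would choose, per pixel, which aligned image contributes content before Poisson blending smooths the remaining seams.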



Acknowledgements

This work was supported by JSPS KAKENHI Grant Numbers JP18H03273, JP18H04116, JP21H03483.

Author information

Corresponding author

Correspondence to Norihiko Kawai.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Noda, R., Kawai, N. (2022). Generation of Omnidirectional Image Without Photographer. In: Sumi, K., Na, I.S., Kaneko, N. (eds) Frontiers of Computer Vision. IW-FCV 2022. Communications in Computer and Information Science, vol 1578. Springer, Cham. https://doi.org/10.1007/978-3-031-06381-7_20


  • DOI: https://doi.org/10.1007/978-3-031-06381-7_20

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06380-0

  • Online ISBN: 978-3-031-06381-7

  • eBook Packages: Computer Science, Computer Science (R0)
