
Light-Weight Novel View Synthesis for Casual Multiview Photography

  • Inchang Choi
  • Yeong Beum Lee
  • Dae R. Jeong
  • Insik Shin
  • Min H. Kim (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11844)

Abstract

Traditional view synthesis for image-based rendering requires several processes: camera synchronization with professional equipment, geometric calibration, multiview stereo, and surface reconstruction. These steps demand heavy computation and manual user interaction throughout, so view synthesis has been available exclusively to professional users. In this paper, we address these costs to enable view synthesis for casual users, even with mobile-phone cameras. We assume that casual users capture multiple photographs with their phone cameras, which are then used for view synthesis. First, without relying on expensive synchronization hardware, our method captures synchronous multiview photographs by utilizing a wireless network protocol. Second, our method provides light-weight image-based rendering on the mobile phone: heavy computational processes, such as estimating geometry proxies, alpha mattes, and inpainted textures, are offloaded to a server and shared back within an interactive time. Finally, our method renders novel views along a virtual camera path on the mobile device, enabling bullet-time photography from casual multiview captures.
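The first step above, capturing synchronous photographs over a wireless network without dedicated hardware, can be illustrated with an NTP-style clock-offset sketch: each phone estimates its offset to a coordinator device and then fires its shutter at an agreed coordinator time. This is a minimal sketch, not the authors' implementation; `ping_coordinator` and `trigger_shutter` are hypothetical callables standing in for the actual wireless messaging and camera APIs.

```python
import time


def estimate_offset(ping_coordinator, samples=8):
    """Estimate this device's clock offset to the coordinator.

    ping_coordinator is an assumed callable returning the coordinator's
    current timestamp. The classic NTP offset formula
    offset = ((t1 - t0) + (t2 - t3)) / 2 is applied, with t1 == t2 for a
    single-timestamp reply; the lowest-latency sample is kept.
    """
    best = None
    for _ in range(samples):
        t0 = time.time()               # request sent (local clock)
        t_coord = ping_coordinator()   # coordinator's clock at reply
        t3 = time.time()               # reply received (local clock)
        rtt = t3 - t0
        offset = t_coord - (t0 + rtt / 2.0)
        if best is None or rtt < best[0]:
            best = (rtt, offset)
    return best[1]


def capture_at(target_coord_time, offset, trigger_shutter):
    """Sleep until the agreed coordinator time, then trigger the camera."""
    local_target = target_coord_time - offset
    delay = local_target - time.time()
    if delay > 0:
        time.sleep(delay)
    trigger_shutter()                  # assumed camera-trigger callback
```

In practice the coordinator would broadcast `target_coord_time` to all phones over the shared network; residual error is bounded by the network round-trip jitter rather than by manual shutter timing.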

Keywords

View synthesis · Computational photography · Multiview

Notes

Acknowledgements

Min H. Kim acknowledges Korea NRF grants (2019R1A2C3007229, 2013M3A6A-6073718) and additional support by the Cross-Ministry Giga KOREA Project (GK17-P0200), Samsung Electronics (SRFC-IT1402-02), ETRI (19ZR1400), and an ICT R&D program of MSIT/IITP of Korea (2016-0-00018).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Inchang Choi (1)
  • Yeong Beum Lee (1)
  • Dae R. Jeong (1)
  • Insik Shin (1)
  • Min H. Kim (1), email author

  1. KAIST School of Computing, Daejeon, South Korea
