LiteAR: A Framework to Estimate Lighting for Mixed Reality Sessions for Enhanced Realism

  • Conference paper
  • In: Advances in Computer Graphics (CGI 2022)

Abstract

We propose an end-to-end, learning-based method to estimate irradiance in real time from a single limited-field-of-view image captured by a mobile phone camera. We further develop a technique, inspired by physically based rendering, that exploits spatially varying environment lighting to illuminate virtual objects in augmented reality sessions and make them look more realistic. We integrate the Inertial Measurement Unit (IMU) sensor to update the illumination estimate dynamically, keeping the mixed reality experience interactive. Our solution runs in real time on mobile phones, with significantly lower computational requirements and enhanced realism compared with state-of-the-art methods.
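
The paper itself ships no code, but the pipeline the abstract describes has a well-known shape: a network predicts a compact lighting representation from the camera frame, and the renderer uses that representation, updated with the device orientation reported by the IMU, to shade virtual objects. As a minimal sketch of the shading side only, the snippet below assumes the estimated lighting is encoded as second-order spherical harmonics (SH), the standard 9-coefficient-per-channel encoding of diffuse irradiance from Ramamoorthi and Hanrahan's formulation; the paper does not confirm this encoding, and the `sh_coeffs` values are hypothetical placeholders rather than real model output.

```python
import numpy as np

# Clamped-cosine convolution weights per SH band (Ramamoorthi & Hanrahan):
# once the environment is projected onto 2nd-order SH, diffuse irradiance
# at a normal n is a weighted dot product with the SH basis evaluated at n.
A_BAND = np.array([np.pi, 2.0 * np.pi / 3.0, np.pi / 4.0])  # bands l = 0, 1, 2
BAND = np.array([0, 1, 1, 1, 2, 2, 2, 2, 2])                # band of each of the 9 terms

def sh_basis(n):
    """Evaluate the 9 real SH basis functions at a unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,                        # Y_0,0
        0.488603 * y,                    # Y_1,-1
        0.488603 * z,                    # Y_1,0
        0.488603 * x,                    # Y_1,1
        1.092548 * x * y,                # Y_2,-2
        1.092548 * y * z,                # Y_2,-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2,0
        1.092548 * x * z,                # Y_2,1
        0.546274 * (x * x - y * y),      # Y_2,2
    ])

def irradiance(sh_coeffs, n):
    """Diffuse RGB irradiance at normal n; sh_coeffs is 9 x 3 (term x channel)."""
    return sh_basis(n) @ (A_BAND[BAND][:, None] * sh_coeffs)

def rotate_by_quaternion(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z), e.g. the device
    attitude an IMU reports, so normals are shaded in the lighting frame."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2.0 * np.dot(u, v) * u + (w * w - np.dot(u, u)) * v + 2.0 * w * np.cross(u, v)

# Hypothetical 9 x 3 SH coefficients standing in for the estimator's output:
# a slightly cool ambient term plus a light arriving from above (+z).
sh_coeffs = np.zeros((9, 3))
sh_coeffs[0] = [0.8, 0.8, 0.9]
sh_coeffs[2] = [0.3, 0.3, 0.2]

identity = np.array([1.0, 0.0, 0.0, 0.0])            # no device rotation
n_world = rotate_by_quaternion(identity, np.array([0.0, 0.0, 1.0]))
print(irradiance(sh_coeffs, n_world))                # irradiance at an upward-facing point
```

Rotating the normals (or, equivalently, the SH coefficients) with the IMU attitude lets the lighting respond to device motion without re-running the estimator on every frame, which is consistent with the abstract's claim of an interactive, low-compute mobile pipeline.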

Author information

Correspondence to Anamitra Mani.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Raut, C., Mani, A., Muraleedharan, L.P., Velappan, R. (2022). LiteAR: A Framework to Estimate Lighting for Mixed Reality Sessions for Enhanced Realism. In: Magnenat-Thalmann, N., et al. (eds.) Advances in Computer Graphics. CGI 2022. Lecture Notes in Computer Science, vol 13443. Springer, Cham. https://doi.org/10.1007/978-3-031-23473-6_32

  • DOI: https://doi.org/10.1007/978-3-031-23473-6_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23472-9

  • Online ISBN: 978-3-031-23473-6

  • eBook Packages: Computer Science, Computer Science (R0)
