
PVLI: potentially visible layered image for real-time ray tracing


Abstract

Novel view synthesis is frequently employed in video streaming, temporal upsampling, and virtual reality. We propose a new representation, the potentially visible layered image (PVLI), which combines a potentially visible set of the scene geometry with layered color images. A PVLI encodes depth implicitly and enables cheap run-time reconstruction. Furthermore, PVLIs can also be used to reconstruct pixel and layer connectivity, which is crucial for filtering and post-processing of the rendered images. We use PVLIs to achieve both local and server-based real-time ray tracing. In the first case, PVLIs serve as the basis for temporal and spatial upsampling of ray-traced illumination. In the second case, PVLIs are compressed, streamed over the network, and then used by a thin client to perform temporal and spatial upsampling and to hide latency. To shade the view, we use path tracing, accounting for effects such as soft shadows, global illumination, and physically based refraction. Our method supports dynamic lighting and, to a limited extent, also handles view-dependent surface interactions.
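
The abstract describes the representation only at a conceptual level. As a rough illustration of how a PVLI-style container and its client-side reconstruction could be organized, consider the C++ sketch below. All type and function names (ColorLayer, PVLI, reconstructNovelView) are assumptions made for this example; they are not taken from the paper or its released code.

// Illustrative sketch only; not the authors' implementation.
#include <cstdint>
#include <vector>

// One color layer shaded from the reference (server) view.
struct ColorLayer {
    int width = 0, height = 0;
    std::vector<uint32_t> rgba;          // packed 8-bit RGBA texels, width * height entries
};

// Potentially visible layered image: the potentially visible subset of the scene
// geometry plus layered color images. Depth is not stored; it is recovered at run
// time by rasterizing the PVS geometry from the novel viewpoint.
struct PVLI {
    std::vector<uint32_t> pvsTriangles;  // triangle indices into the full scene mesh (the PVS)
    std::vector<ColorLayer> layers;      // layer 0: directly visible surfaces; deeper layers: occluded ones
    float referenceViewProj[16] = {};    // column-major view-projection matrix of the reference camera
};

// Sketch of client-side reconstruction for a novel camera.
void reconstructNovelView(const PVLI& pvli, const float novelViewProj[16])
{
    // 1. Rasterize pvli.pvsTriangles with novelViewProj; depth falls out of the
    //    rasterization, so it never has to be stored or transmitted explicitly.
    // 2. Reproject each visible fragment into the reference view via
    //    pvli.referenceViewProj and sample its color from the appropriate layer;
    //    deeper layers supply colors for surfaces disoccluded by the view change.
    (void)pvli;
    (void)novelViewProj;
}

int main()
{
    PVLI frame;                          // in the streaming setting, decoded from the network
    float novelViewProj[16] = {};        // novel camera pose supplied by the (thin) client
    reconstructNovelView(frame, novelViewProj);
    return 0;
}

The point mirrored from the abstract is that depth never needs to be stored or transmitted: rasterizing the PVS geometry from the novel camera recovers it, while the layered images only supply color, including color for surfaces disoccluded by the view change.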



Notes

  1. https://github.com/jaxtren/PVLI.

  2. https://github.com/jbikker/lighthouse2.

  3. https://github.com/SEED-EA/pica-pica-assets.



Acknowledgements

This work was supported by the Czech Science Foundation (GA18-20374S), by the Research Center for Informatics (CZ.02.1.01/0.0/0.0/16_019/0000765), and by the Grant Agency of the Czech Technical University in Prague, grant No. SGS22/173/OHK3/3T/13.

Author information


Corresponding author

Correspondence to Martin Káčerik.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (avi 176078 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kravec, J., Káčerik, M. & Bittner, J. PVLI: potentially visible layered image for real-time ray tracing. Vis Comput 39, 3359–3372 (2023). https://doi.org/10.1007/s00371-023-03007-5

