
NeRF Synthesis with Shading Guidance

  • Conference paper
  • First Online:
Computer-Aided Design and Computer Graphics (CADGraphics 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14250))


Abstract

The emerging Neural Radiance Field (NeRF) shows great potential for representing 3D scenes: it can render photo-realistic images from novel views given only sparse input views. However, using NeRF to reconstruct real-world scenes requires images from many different viewpoints, which limits its practical application; the problem is even more pronounced for large scenes. In this paper, we introduce a new task, called NeRF synthesis, that utilizes the structural content of a NeRF exemplar to construct a new radiance field of large size. We propose a two-phase method for synthesizing new scenes that are continuous in both geometry and appearance, and a boundary-constraint method for synthesizing scenes of arbitrary size without artifacts. Specifically, the lighting effects of the synthesized scenes are controlled using shading guidance instead of decoupling the scene. The proposed method generates high-quality results with consistent geometry and appearance, even for scenes with complex lighting, and can even synthesize new scenes on curved surfaces with arbitrary lighting effects, which enhances the practicality of our proposed NeRF synthesis approach.
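
To make the patch-based idea in the abstract concrete, the following is a minimal, illustrative quilting-style synthesizer for a single 2D feature slice (a stand-in for a voxelized radiance-field grid), with an optional shading map entering the patch-selection cost as a guidance term. The function name, the 2D simplification, and the L2 costs are assumptions made for exposition; this is not the paper's actual pipeline, which operates on a full radiance field with boundary constraints in 3D.

    import numpy as np

    def synthesize_slice(exemplar, out_shape, patch=16, overlap=4,
                         shading=None, w_shade=0.5, seed=0):
        """Quilting-style synthesis of a 2D feature slice (a stand-in for a
        voxelized radiance-field grid). Each output tile copies the exemplar
        patch whose overlap with already-synthesized content is most consistent,
        optionally biased toward a target shading map (the guidance term).
        Regions beyond the last full tile are left empty in this toy version."""
        rng = np.random.default_rng(seed)
        h, w = exemplar.shape
        step = patch - overlap
        out = np.zeros(out_shape, dtype=exemplar.dtype)
        # Densely sampled candidate patches from the exemplar.
        cands = [exemplar[y:y + patch, x:x + patch]
                 for y in range(0, h - patch + 1, 2)
                 for x in range(0, w - patch + 1, 2)]
        for oy in range(0, out_shape[0] - patch + 1, step):
            for ox in range(0, out_shape[1] - patch + 1, step):
                best, best_cost = None, np.inf
                for c in cands:
                    cost = 1e-6 * rng.random()  # random tie-breaking
                    if oy > 0:  # seam consistency with the tile above
                        cost += np.sum((out[oy:oy + overlap, ox:ox + patch] - c[:overlap]) ** 2)
                    if ox > 0:  # seam consistency with the tile to the left
                        cost += np.sum((out[oy:oy + patch, ox:ox + overlap] - c[:, :overlap]) ** 2)
                    if shading is not None:  # shading-guidance cost toward the target lighting
                        cost += w_shade * np.sum((c - shading[oy:oy + patch, ox:ox + patch]) ** 2)
                    if cost < best_cost:
                        best, best_cost = c, cost
                out[oy:oy + patch, ox:ox + patch] = best
        return out

    # Example: grow a 96x96 slice from a 48x48 exemplar under a flat target shading.
    exemplar = np.random.rand(48, 48).astype(np.float32)
    target_shading = np.full((96, 96), 0.5, dtype=np.float32)
    result = synthesize_slice(exemplar, (96, 96), shading=target_shading)

The sketch only conveys how overlap consistency and shading guidance both enter the cost used to select patches; in the paper, shading guidance replaces an explicit decoupling of the scene into intrinsic components.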

Notes

  1.

    We encourage interested readers to refer to their original work for a more comprehensive understanding.


Acknowledgments

We thank the reviewers for their valuable comments. This work is supported by the National Key R&D Program of China (2022YFB3303400) and the National Natural Science Foundation of China (62025207).

Author information


Corresponding author

Correspondence to Ligang Liu.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 22758 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Li, C., Xin, Y., Liu, G., Zeng, X., Liu, L. (2024). NeRF Synthesis with Shading Guidance. In: Hu, SM., Cai, Y., Rosin, P. (eds) Computer-Aided Design and Computer Graphics. CADGraphics 2023. Lecture Notes in Computer Science, vol 14250. Springer, Singapore. https://doi.org/10.1007/978-981-99-9666-7_16

  • DOI: https://doi.org/10.1007/978-981-99-9666-7_16

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9665-0

  • Online ISBN: 978-981-99-9666-7

  • eBook Packages: Computer Science, Computer Science (R0)
