AdaNeRF: Adaptive Sampling for Real-Time Rendering of Neural Radiance Fields

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13677)

Abstract

Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations. However, rendering images with this new paradigm is slow, because an accurate quadrature of the volume rendering equation requires a large number of samples per ray. Previous work has mainly focused on speeding up the network evaluation associated with each sample point, e.g., by caching radiance values in explicit spatial data structures, but this comes at the expense of model compactness. In this paper, we propose a novel dual-network architecture that takes an orthogonal direction by learning how best to reduce the number of required sample points. To this end, we split our network into a sampling network and a shading network that are jointly trained. Our training scheme uses fixed sample positions along each ray and incrementally introduces sparsity throughout training to achieve high quality even at low sample counts. After fine-tuning with the target number of samples, the resulting compact neural representation can be rendered in real time. Our experiments demonstrate that our approach outperforms concurrent compact neural representations in terms of quality and frame rate, and performs on par with highly efficient hybrid representations. Code and supplementary material are available at https://thomasneff.github.io/adanerf.
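To make the rendering path described in the abstract concrete, here is a minimal sketch in NumPy. This is an illustration of the general idea only, not the authors' implementation: the two network functions are random stand-ins, and names such as sampling_network, shading_network, N_FIXED, and N_SPARSE are hypothetical. The joint training scheme and the incremental sparsification schedule are omitted; only the inference path is shown (predict per-sample importance over fixed ray positions, shade only the most important samples, alpha-composite the result).

```python
# Hypothetical sketch of AdaNeRF-style adaptive sampling (illustrative, not the
# authors' code). A sampling network scores N_FIXED fixed positions along each
# ray; only the N_SPARSE highest-scoring samples are passed to the shading network.

import numpy as np

rng = np.random.default_rng(0)

N_FIXED = 128   # fixed discretization of each ray (assumed value)
N_SPARSE = 8    # target sample count after sparsification (assumed value)

def sampling_network(ray_origins, ray_dirs):
    """Stand-in for the learned sampling network: returns an importance
    weight in (0, 1) for each of the N_FIXED positions along each ray.
    A real network would be an MLP conditioned on the ray."""
    logits = rng.normal(size=(ray_origins.shape[0], N_FIXED))
    return 1.0 / (1.0 + np.exp(-logits))

def shading_network(points, dirs):
    """Stand-in for the shading network: maps 3D points (and, in a real
    model, view directions; unused here) to RGB and density."""
    rgb = np.clip(np.sin(points), 0.0, 1.0)        # fake colors in [0, 1]
    sigma = np.abs(points.sum(-1, keepdims=True))  # fake nonnegative density
    return rgb, sigma

def render_rays(ray_origins, ray_dirs, t_near=2.0, t_far=6.0):
    t_vals = np.linspace(t_near, t_far, N_FIXED)          # fixed sample positions
    weights = sampling_network(ray_origins, ray_dirs)     # (n_rays, N_FIXED)

    # Keep only the N_SPARSE most important samples per ray, in depth order.
    keep = np.argsort(weights, axis=-1)[:, -N_SPARSE:]
    keep.sort(axis=-1)
    t_sel = t_vals[keep]                                  # (n_rays, N_SPARSE)

    # Shade only the selected points: this is where the speedup comes from.
    pts = ray_origins[:, None, :] + t_sel[..., None] * ray_dirs[:, None, :]
    rgb, sigma = shading_network(pts, ray_dirs)

    # Standard NeRF alpha compositing over the selected samples.
    deltas = np.diff(t_sel, axis=-1, append=t_far)        # spacing between samples
    alpha = 1.0 - np.exp(-sigma[..., 0] * deltas)
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=-1)
    trans = np.concatenate([np.ones_like(trans[:, :1]), trans[:, :-1]], axis=-1)
    w = alpha * trans
    return (w[..., None] * rgb).sum(axis=1)               # (n_rays, 3) RGB

# Example: render a handful of rays.
origins = np.zeros((4, 3))
dirs = rng.normal(size=(4, 3))
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
print(render_rays(origins, dirs))
```

The design choice reflected here is that the expensive shading network is evaluated on N_SPARSE points instead of all N_FIXED, which is what makes real-time rendering feasible at low sample counts.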

A. Kurz and T. Neff contributed equally to this work.

Author information

Corresponding author: Andreas Kurz.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 1848 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kurz, A., Neff, T., Lv, Z., Zollhöfer, M., Steinberger, M. (2022). AdaNeRF: Adaptive Sampling for Real-Time Rendering of Neural Radiance Fields. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13677. Springer, Cham. https://doi.org/10.1007/978-3-031-19790-1_16

  • DOI: https://doi.org/10.1007/978-3-031-19790-1_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19789-5

  • Online ISBN: 978-3-031-19790-1

  • eBook Packages: Computer Science, Computer Science (R0)
