
Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification

Conference paper in Computer Vision – ECCV 2022 (ECCV 2022).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13663).

Abstract

A critical limitation of current methods based on Neural Radiance Fields (NeRF) is that they are unable to quantify the uncertainty associated with the learned appearance and geometry of the scene. This information is paramount in real applications such as medical diagnosis or autonomous driving where, to reduce potentially catastrophic failures, the confidence in the model outputs must be incorporated into the decision-making process. In this context, we introduce Conditional-Flow NeRF (CF-NeRF), a novel probabilistic framework to incorporate uncertainty quantification into NeRF-based approaches. For this purpose, our method learns a distribution over all the possible radiance fields modelling the scene, which is used to quantify the uncertainty associated with the modelled scene. In contrast to previous approaches that enforce strong constraints over the radiance field distribution, CF-NeRF learns it in a flexible and fully data-driven manner by coupling Latent Variable Modelling and Conditional Normalizing Flows. This strategy allows us to obtain reliable uncertainty estimates while preserving model expressivity. Compared to previous state-of-the-art methods for uncertainty quantification in NeRF, our experiments show that the proposed method achieves significantly lower prediction errors and more reliable uncertainty values for novel-view synthesis and depth-map estimation on synthetic scenes.
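The core idea of the abstract can be illustrated with a minimal sketch: draw latent samples through a conditional flow, decode each into a radiance field, volume-render each field along a ray, and use the spread of the renderings as the uncertainty. Everything below is a toy stand-in, not the authors' implementation: `conditional_flow_sample`, `radiance_field`, and the fixed shift/scale values are hypothetical; in CF-NeRF these would be learned networks conditioned on 3D position and view direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_flow_sample(eps, shift, log_scale):
    """One affine flow step: z = eps * exp(log_scale) + shift.

    In CF-NeRF the shift/scale would come from a conditioning
    network; here they are fixed toy values.
    """
    return eps * np.exp(log_scale) + shift

def radiance_field(z, t):
    """Map a latent sample z to densities and colors along a ray.

    A real model would be an MLP; this toy version makes density a
    bump whose height depends on z, so different latent samples
    yield different renderings.
    """
    sigma = np.clip(z[0], 0.0, None) * np.exp(-(t - 2.0) ** 2)  # density
    color = 1.0 / (1.0 + np.exp(-z[1]))  # constant color in [0, 1]
    return sigma, np.full_like(t, color)

def volume_render(sigma, color, t):
    """Standard NeRF quadrature: C = sum_i T_i (1 - exp(-sigma_i d_i)) c_i."""
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], (1.0 - alpha)[:-1]]))
    return np.sum(trans * alpha * color)

# Monte Carlo uncertainty over the learned radiance-field distribution.
t = np.linspace(0.0, 4.0, 64)   # sample depths along one camera ray
K = 256                         # number of radiance-field samples
renders = []
for _ in range(K):
    eps = rng.standard_normal(2)  # base distribution sample
    z = conditional_flow_sample(eps, shift=np.array([1.5, 0.0]),
                                log_scale=np.array([-1.0, 0.5]))
    sigma, color = radiance_field(z, t)
    renders.append(volume_render(sigma, color, t))

renders = np.array(renders)
pred, uncertainty = renders.mean(), renders.std()
print(f"predicted color: {pred:.3f} +/- {uncertainty:.3f}")
```

The per-pixel uncertainty is simply the standard deviation of the renderings across sampled radiance fields; the flexibility claimed in the abstract comes from the flow being able to represent non-Gaussian distributions, which this single affine step does not capture.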


Notes

  1. https://github.com/bmild/nerf.


Acknowledgements

This work is supported in part by the Chinese Scholarship Council (CSC) under grant 201906120031, by the Spanish government under project MoHuCo PID2020-120049RB-I00, and by the CHIST-ERA project IPALM PCI2019-103386. We also thank Nvidia for a hardware donation under the GPU Grant Program.

Author information

Correspondence to Jianxiong Shen.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 9221 KB)

Supplementary material 2 (mp4 1120 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Shen, J., Agudo, A., Moreno-Noguer, F., Ruiz, A. (2022). Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13663. Springer, Cham. https://doi.org/10.1007/978-3-031-20062-5_31

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-20062-5_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20061-8

  • Online ISBN: 978-3-031-20062-5

  • eBook Packages: Computer Science (R0)
