Invertible Neural BRDF for Object Inverse Rendering

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12350)

Abstract

We introduce a novel neural network-based BRDF model and a Bayesian framework for object inverse rendering, i.e., joint estimation of reflectance and natural illumination from a single image of an object of known geometry. The BRDF is expressed with an invertible neural network, namely, a normalizing flow, which provides the expressive power of a high-dimensional representation, the computational simplicity of a compact analytical model, and the physical plausibility of a real-world BRDF. We extract the latent space of real-world reflectance by conditioning this model, which directly yields a strong reflectance prior. We refer to this model as the invertible neural BRDF model (iBRDF). We also devise a deep illumination prior by leveraging the structural bias of deep neural networks. By integrating this novel BRDF model and the reflectance and illumination priors in a MAP estimation formulation, we show that this joint estimation can be computed efficiently with stochastic gradient descent. We experimentally validate the accuracy of the invertible neural BRDF model on a large body of measured data and demonstrate its use in object inverse rendering on a number of synthetic and real images. The results show new ways in which deep neural networks can help solve challenging radiometric inverse problems.
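To make the normalizing-flow idea concrete, the following is a minimal sketch of an invertible additive coupling layer (in the style of NICE, one of the flow families the paper builds on). It is a hypothetical stand-in, not the authors' iBRDF architecture: the conditioner (a single tanh layer with weights `w`, `b`) and the dimensionality are illustrative assumptions. It shows the two properties the abstract relies on: exact invertibility by construction, and tractable density evaluation via the change of variables under a standard Gaussian base distribution.

```python
import numpy as np

def coupling_forward(x, w, b):
    """Additive coupling: split x into halves and shift the second half
    by a learned function of the first. The Jacobian is unit-triangular,
    so log|det J| = 0."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    shift = np.tanh(x1 @ w + b)  # simple conditioner network (illustrative)
    return np.concatenate([x1, x2 + shift], axis=-1)

def coupling_inverse(y, w, b):
    """Exact inverse: recompute the same shift from the untouched half
    and subtract it. Invertibility holds by design, no solver needed."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    shift = np.tanh(y1 @ w + b)
    return np.concatenate([y1, y2 - shift], axis=-1)

def log_density(x, w, b):
    """Change of variables: log p(x) = log N(f(x); 0, I) + log|det J|,
    and the log-det term vanishes for additive coupling."""
    z = coupling_forward(x, w, b)
    return -0.5 * np.sum(z**2, axis=-1) - 0.5 * x.shape[-1] * np.log(2 * np.pi)
```

A practical flow stacks many such layers (permuting which half is conditioned on) so the composed map is expressive while its log-density remains an exact, differentiable quantity, which is what makes the MAP estimation by stochastic gradient descent described above tractable.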

Keywords

Reflectance · BRDF · Inverse rendering · Illumination estimation

Notes

Acknowledgement

This work was in part supported by JSPS KAKENHI 17K20143 and a donation by HiSilicon.

Supplementary material

Supplementary material 1 (PDF, 783 KB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Kyoto University, Kyoto, Japan
