Deep Reflectance Volumes: Relightable Reconstructions from Multi-view Photometric Images

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12348)

Abstract

We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting. At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids. We present a novel physically-based differentiable volume ray marching framework to render these scene volumes under arbitrary viewpoint and lighting. This allows us to optimize the scene volumes to minimize the error between their rendered images and the captured images. Our method is able to reconstruct real scenes with challenging non-Lambertian reflectance and complex geometry with occlusions and shadowing. Moreover, it accurately generalizes to novel viewpoints and lighting, including non-collocated lighting, rendering photorealistic images that are significantly better than state-of-the-art mesh-based methods. We also show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
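To make the core idea concrete, below is a minimal sketch of opacity-weighted differentiable volume ray marching over learnable voxel grids, in the spirit of the pipeline the abstract describes. It assumes a PyTorch-style setup; the grid names, the resolution D, and the simplified Lambertian point-light shading are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of differentiable volume ray marching
# over learnable opacity, normal, and reflectance voxel grids, assuming PyTorch.
import torch
import torch.nn.functional as F

D = 64  # voxel grid resolution (assumption)
opacity_grid = torch.rand(1, 1, D, D, D, requires_grad=True)   # extinction density
normal_grid  = torch.randn(1, 3, D, D, D, requires_grad=True)  # surface normals
albedo_grid  = torch.rand(1, 3, D, D, D, requires_grad=True)   # diffuse reflectance

def sample(grid, pts):
    # Trilinearly sample a voxel grid at 3D points given in [-1, 1]^3.
    pts = pts.view(1, -1, 1, 1, 3)                      # (1, N, 1, 1, 3)
    out = F.grid_sample(grid, pts, align_corners=True)  # (1, C, N, 1, 1)
    return out.view(grid.shape[1], -1).t()              # (N, C)

def march_rays(origins, dirs, light_pos, n_steps=128, step=2.0 / 128):
    # Accumulate shaded color front-to-back with opacity (alpha) weighting.
    color = torch.zeros(origins.shape[0], 3)
    trans = torch.ones(origins.shape[0], 1)             # remaining transmittance
    for i in range(n_steps):
        pts = origins + (i + 0.5) * step * dirs
        sigma  = F.softplus(sample(opacity_grid, pts))  # non-negative extinction
        alpha  = 1.0 - torch.exp(-sigma * step)         # per-step opacity
        normal = F.normalize(sample(normal_grid, pts), dim=-1)
        albedo = sample(albedo_grid, pts)
        # Simplified point-light Lambertian shading at each sample point.
        l = F.normalize(light_pos - pts, dim=-1)
        shaded = albedo * (normal * l).sum(-1, keepdim=True).clamp(min=0.0)
        color = color + trans * alpha * shaded
        trans = trans * (1.0 - alpha)
    return color

# Fit the volumes by minimizing photometric error against the captured images.
optimizer = torch.optim.Adam([opacity_grid, normal_grid, albedo_grid], lr=1e-2)

In a training loop one would render batches of rays, with the light placed at the camera origin to match the collocated capture setup, and backpropagate an L2 loss between march_rays(...) and the corresponding captured pixel colors. The paper's actual renderer additionally handles shadowing, occlusion, and non-Lambertian reflectance, which this sketch omits.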

Keywords

View synthesis · Relighting · Appearance acquisition · Neural rendering

Notes

Acknowledgements

We thank Giljoo Nam for help with the comparisons. This work was supported in part by ONR grants N000141712687, N000141912293, N000142012529, NSF grant 1617234, Adobe, the Ronald L. Graham Chair and the UC San Diego Center for Visual Computing.

Supplementary material

Supplementary material 1 (pdf 3866 KB)

Supplementary material 2 (mp4 67788 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. University of California, San Diego, USA
  2. Adobe Research, San Jose, USA