
Materials for Masses: SVBRDF Acquisition with a Single Mobile Phone Image

  • Zhengqin Li
  • Kalyan Sunkavalli
  • Manmohan Chandraker
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11207)

Abstract

We propose a material acquisition approach to recover the spatially-varying BRDF and normal map of a near-planar surface from a single image captured by a handheld mobile phone camera. Our method images the surface under arbitrary environment lighting with the flash turned on, thereby avoiding shadows while simultaneously capturing high-frequency specular highlights. We train a CNN to regress an SVBRDF and surface normals from this image. Our network is trained using a large-scale SVBRDF dataset and designed to incorporate physical insights for material estimation, including an in-network rendering layer to model appearance and a material classifier to provide additional supervision during training. We refine the results from the network using a dense CRF module whose terms are designed specifically for our task. The framework is trained end-to-end and produces high-quality results for a variety of materials. We provide extensive ablation studies to evaluate our network on both synthetic and real data, and demonstrate significant improvements over prior work.
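As a rough illustration of what the in-network rendering layer computes per pixel, the sketch below shades a single point with a GGX-style microfacet BRDF under a point light collocated with the camera (matching the flash-on capture setup, where the half vector coincides with the view direction). This is a minimal, hypothetical sketch, not the paper's implementation: the function name, the fixed dielectric specular reflectance `f0`, and the scalar (non-spatially-varying) inputs are all illustrative assumptions.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def ggx_render(diffuse_albedo, normal, roughness, view_dir, light_intensity=1.0):
    """Shade one surface point under a point light collocated with the
    camera, using a simplified GGX microfacet BRDF. With light and camera
    collocated, the light direction equals the view direction, so the
    half vector h == v."""
    n = normalize(normal)
    v = normalize(view_dir)
    ndv = max(dot(n, v), 1e-6)
    ndh = ndv  # h == v in the collocated setup

    # GGX normal distribution term
    a = roughness * roughness
    d = (a * a) / (math.pi * (ndh * ndh * (a * a - 1.0) + 1.0) ** 2)

    # Smith geometry term with the Schlick-style k remapping
    k = (roughness + 1.0) ** 2 / 8.0
    g1 = ndv / (ndv * (1.0 - k) + k)
    g = g1 * g1  # same masking term for view and light directions

    # Schlick Fresnel with an assumed fixed dielectric reflectance
    f0 = 0.05
    f = f0 + (1.0 - f0) * (1.0 - ndv) ** 5

    specular = d * g * f / (4.0 * ndv * ndv)
    diffuse = diffuse_albedo / math.pi
    return light_intensity * (diffuse + specular) * ndv
```

In the network, this computation would run over entire predicted albedo, normal, and roughness maps, so that a rendering loss against the input photograph can supervise the SVBRDF estimates; here it is reduced to one pixel for clarity. Note that smoother surfaces (lower roughness) produce a much stronger head-on specular response, which is exactly the highlight cue the flash image provides.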

Acknowledgements

Z. Li and M. Chandraker are supported by NSF CAREER 1751365 and gratefully acknowledge funding from Adobe, BASF, Cognex and Snap. This work was partially done during Z. Li’s summer internship at Adobe.

Supplementary material

Supplementary material 1: 474178_1_En_5_MOESM1_ESM.pdf (18.5 MB)
Supplementary material 2: 474178_1_En_5_MOESM2_ESM.mp4 (27.7 MB)


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Zhengqin Li (1)
  • Kalyan Sunkavalli (2)
  • Manmohan Chandraker (1)
  1. University of California, San Diego, USA
  2. Adobe Research, San Jose, USA
