
Reflectance and Shape Estimation with a Light Field Camera Under Natural Illumination

  • Thanh-Trung Ngo (corresponding author)
  • Hajime Nagahara
  • Ko Nishino
  • Rin-ichiro Taniguchi
  • Yasushi Yagi

Abstract

Reflectance and shape are two key components in visually perceiving the real world, and inferring them from camera images is a fundamental problem in computer vision. While three-dimensional shape recovery is well established, with a wide variety of approaches and practical applications, reflectance recovery has emerged only recently. It remains a challenging task that is usually carried out in controlled environments, such as a laboratory with special apparatus. It is desirable, however, to recover reflectance in the field with a handheld camera so that it can be estimated jointly with shape. To that end, we present a method that simultaneously recovers the reflectance and shape (i.e., dense depth and normal maps) of an object under natural illumination using commercially available, handheld cameras: a light field camera captures a single light field image of the object, and a 360-degree camera captures the surrounding illumination. The proposed method yields positive results in both simulation and real-world experiments.
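
To make the estimation problem concrete, the image-formation model typically optimized in this line of work (our illustrative notation; the paper's exact formulation may differ) renders the observed radiance at a surface point x by integrating the captured environment illumination L over the visible hemisphere \Omega against the unknown BRDF \rho and surface normal \mathbf{n}(x):

  I(x) = \int_{\Omega} \rho\bigl(\mathbf{n}(x), \omega_i, \omega_o\bigr)\, L(\omega_i)\, \max\bigl(0, \mathbf{n}(x) \cdot \omega_i\bigr)\, d\omega_i

Here L(\omega_i) is measured by the 360-degree camera, while the sub-aperture views of the single light field image constrain depth through multi-view correspondence; together these constraints are what allow reflectance, depth, and normals to be recovered jointly from a single capture.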

Keywords

Light field camera · Natural illumination · Reflectance · Shape from shading

Notes

Acknowledgements

We thank the anonymous reviewers for their suggestions on how to improve the quality of the manuscript and for an interesting discussion about future work. We thank Glenn Pennycook, MSc, from Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript. Funding was provided by the Japan Society for the Promotion of Science (Grant No. JP16H01675).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  • Thanh-Trung Ngo (1, corresponding author)
  • Hajime Nagahara (1)
  • Ko Nishino (2)
  • Rin-ichiro Taniguchi (3)
  • Yasushi Yagi (1)

  1. Osaka University, Osaka, Japan
  2. Kyoto University, Kyoto, Japan
  3. Kyushu University, Fukuoka, Japan
