Reflectance and Shape Estimation with a Light Field Camera Under Natural Illumination
Reflectance and shape are two important components in visually perceiving the real world. Inferring the reflectance and shape of an object from camera images is a fundamental research topic in computer vision. While three-dimensional shape recovery is well established, with a variety of approaches and practical applications, reflectance recovery has emerged only recently. Reflectance recovery is a challenging task that is usually conducted in controlled environments, such as a laboratory equipped with specialized apparatus. However, it is desirable to recover reflectance in the field with a handheld camera, so that reflectance and shape can be recovered jointly. To that end, we present a solution that simultaneously recovers the reflectance and shape (i.e., dense depth and normal maps) of an object under natural illumination using commercially available handheld cameras. We employ a light field camera to capture a single light field image of the object and a 360-degree camera to capture the surrounding illumination. The proposed method yields positive results in both simulation and real-world experiments.
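The recovery described above can be framed as inverse rendering: synthesize each pixel's intensity from a candidate surface normal and reflectance under the captured environment map, and compare it with the observed intensity. The sketch below is a minimal illustration of that idea, assuming a simple Lambertian-plus-Blinn-Phong reflectance and a discretely sampled environment map; the function names and the reflectance parameterization are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def render_pixel(normal, view, env_dirs, env_radiance, albedo, ks, shininess):
    """Render one pixel's intensity under a sampled environment map.

    Illustrative reflectance: Lambertian diffuse plus a Blinn-Phong
    specular lobe, integrated over discrete environment-map samples
    (env_dirs: (N, 3) unit directions; env_radiance: (N,) intensities).
    """
    n = normal / np.linalg.norm(normal)
    v = view / np.linalg.norm(view)
    cos_i = np.clip(env_dirs @ n, 0.0, None)            # incident cosines
    half = env_dirs + v                                 # Blinn-Phong half vectors
    half /= np.linalg.norm(half, axis=1, keepdims=True)
    spec = np.clip(half @ n, 0.0, None) ** shininess
    brdf = albedo / np.pi + ks * spec                   # diffuse + specular
    return float(np.sum(env_radiance * brdf * cos_i))   # discrete integral

def photometric_residual(params, observed, normal, view, env_dirs, env_radiance):
    """Per-pixel residual a joint reflectance/shape objective would minimize."""
    albedo, ks, shininess = params
    return observed - render_pixel(normal, view, env_dirs, env_radiance,
                                   albedo, ks, shininess)
```

In this framing, summing squared residuals over all sub-aperture views of the light field image yields an objective whose minimization over per-pixel normals, depth, and reflectance parameters corresponds to the joint recovery described above.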
Keywords: Light field camera · Natural illumination · Reflectance · Shape from shading
We thank the anonymous reviewers for their suggestions on improving the quality of the manuscript and for an interesting discussion about future work. We thank Glenn Pennycook, MSc, from Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript. Funding was provided by the Japan Society for the Promotion of Science (Grant No. JP16H01675).