
A Survey on Intrinsic Images: Delving Deep into Lambert and Beyond

  • Published:
International Journal of Computer Vision

Abstract

Intrinsic imaging, or intrinsic image decomposition, has traditionally been described as the problem of decomposing an image into two layers: a reflectance layer, the albedo or invariant color of the material; and a shading layer, produced by the interaction between light and geometry. Deep learning techniques have been broadly applied in recent years to increase the accuracy of those separations. In this survey, we overview those results in the context of well-known intrinsic image data sets and the relevant metrics used in the literature, discussing their suitability to predict a desirable intrinsic image decomposition. Although the Lambertian assumption is still a foundational basis for many methods, we show that there is increasing awareness of the potential of more sophisticated, physically principled components of the image formation process, that is, optically accurate material models and geometry, and more complete inverse light transport estimations. We classify these methods in terms of the type of decomposition, considering the priors and models used, as well as the learning architecture and methodology driving the decomposition process. We also provide insights about future directions for research, given the recent advances in neural, inverse, and differentiable rendering techniques.
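The multiplicative image formation model underlying these decompositions, and the scale-invariant flavor of error metric the literature commonly uses, can be sketched in a few lines. This is a toy illustration under the Lambertian assumption only; `compose` and `si_mse` are hypothetical names for this sketch, not code from any surveyed method.

```python
import numpy as np

def compose(reflectance: np.ndarray, shading: np.ndarray) -> np.ndarray:
    """Lambertian image formation: each pixel is the element-wise
    product of a reflectance (albedo) layer and a shading layer, I = R * S."""
    return reflectance * shading

def si_mse(pred: np.ndarray, gt: np.ndarray) -> float:
    """Scale-invariant MSE: the predicted layer is first rescaled by the
    least-squares optimal factor alpha, since the decomposition I = R * S
    is only defined up to a global scale between the two layers."""
    alpha = (pred * gt).sum() / max((pred * pred).sum(), 1e-12)
    return float(((alpha * pred - gt) ** 2).mean())

# Toy 2x2 grayscale example: a checkerboard albedo lit by a smooth gradient.
R = np.array([[0.2, 0.8], [0.8, 0.2]])   # reflectance (material color)
S = np.array([[1.0, 0.9], [0.8, 0.7]])   # shading (light and geometry)
I = compose(R, S)

# A prediction that is correct up to a global scale incurs zero SI-MSE.
assert si_mse(2.0 * R, R) < 1e-12
# Dividing the image by the ground-truth albedo recovers the shading.
assert np.allclose(I / R, S)
```

Most learning-based methods surveyed predict R and S jointly and are evaluated with scale-invariant or locally scale-invariant variants of this error, precisely because the multiplicative model leaves the absolute scale of each layer ambiguous.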



Acknowledgements

Elena Garces was partially supported by a Torres Quevedo Fellowship (PTQ2018-009868). The work was also funded in part by the Spanish Ministry of Science (RTI2018-098694-B-I00 VizLearning).

Corresponding author

Correspondence to Elena Garces.

Additional information

Communicated by Rei Kawakami.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Garces, E., Rodriguez-Pardo, C., Casas, D. et al. A Survey on Intrinsic Images: Delving Deep into Lambert and Beyond. Int J Comput Vis 130, 836–868 (2022). https://doi.org/10.1007/s11263-021-01563-8
