Lighting transfer across multiple views through local color transforms



We present a method for transferring lighting between photographs of a static scene. Our method takes as input a photo collection depicting a scene with varying viewpoints and lighting conditions. We cast lighting transfer as an edit propagation problem, where the transfer of local illumination across images is guided by sparse correspondences obtained through multi-view stereo. Instead of directly propagating color, we learn local color transforms from corresponding patches in pairs of images and propagate these transforms in an edge-aware manner to regions with no correspondences. Our color transforms model the large variability of appearance changes in local regions of the scene, and are robust to missing or inaccurate correspondences. The method is fully automatic and can transfer strong shadows between images. We show applications of our image relighting method for enhancing photographs, browsing photo collections with harmonized lighting, and generating synthetic time-lapse sequences.
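The paper gives the full formulation of these local color transforms; as a rough illustration only, the core idea of learning a per-patch transform from corresponding pixels can be sketched as an affine fit by regularized least squares (the function names and the ridge parameter below are hypothetical, not taken from the paper):

```python
import numpy as np

def fit_local_color_transform(src_colors, tgt_colors, reg=1e-3):
    """Fit an affine transform mapping source patch colors to target patch colors.

    src_colors, tgt_colors: (N, 3) arrays of corresponding RGB values.
    Returns a (3, 4) matrix M such that tgt ~= M @ [r, g, b, 1]^T.
    A small ridge term keeps the fit stable when a patch has little
    color variation or few correspondences.
    """
    src = np.asarray(src_colors, dtype=np.float64)
    tgt = np.asarray(tgt_colors, dtype=np.float64)
    A = np.hstack([src, np.ones((src.shape[0], 1))])  # (N, 4) homogeneous coords
    # Regularized normal equations: (A^T A + reg*I) X = A^T tgt, X is (4, 3)
    lhs = A.T @ A + reg * np.eye(4)
    rhs = A.T @ tgt
    X = np.linalg.solve(lhs, rhs)
    return X.T  # (3, 4)

def apply_color_transform(M, colors):
    """Apply a (3, 4) affine color transform to an (N, 3) array of colors."""
    colors = np.asarray(colors, dtype=np.float64)
    A = np.hstack([colors, np.ones((colors.shape[0], 1))])
    return A @ M.T
```

In the paper's pipeline these transforms are then propagated in an edge-aware manner to regions lacking correspondences; the sketch above covers only the per-patch fitting step.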





We would like to thank all reviewers for their comments and suggestions. The first author carried out the earlier phase of the research at the National University of Singapore with support from the School of Computing. This research is supported by the BeingThere Centre, a collaboration between Nanyang Technological University Singapore, Eidgenössische Technische Hochschule Zürich, and the University of North Carolina at Chapel Hill. The BeingThere Centre is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and is administered by the Interactive Digital Media Programme Office.

Author information

Correspondence to Qian Zhang.

Additional information


Qian Zhang is a research assistant at Nanyang Technological University, Singapore. Her research interests include image processing, computational photography, and image-based rendering. She received her B.S. degree in electronics and information engineering from Huazhong University of Science and Technology, China.

Pierre-Yves Laffont is the CEO and co-founder of Lemnis Technologies. During this research, he was a postdoctoral researcher at ETH Zurich and a visiting researcher at Nanyang Technological University. His research interests include intrinsic image decomposition, example-based appearance transfer, and image-based rendering and relighting. He received his Ph.D. degree in computer science from Inria Sophia-Antipolis.

Terence Sim is an associate professor at the School of Computing, National University of Singapore. He is also an assistant dean of corporate relations at the School. Dr. Sim works primarily in the areas of facial image analysis, biometrics, and computational photography. He is also interested in computer vision problems in general, such as shape-from-shading, photometric stereo, and object recognition. From 2014 to 2016, Dr. Sim served as president of the Pattern Recognition and Machine Intelligence Association (PREMIA), a national professional body for pattern recognition, affiliated with the International Association for Pattern Recognition (IAPR).


Rights and permissions

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



About this article


Cite this article

Zhang, Q., Laffont, P.-Y. & Sim, T. Lighting transfer across multiple views through local color transforms. Comp. Visual Media 3, 315–324 (2017) doi:10.1007/s41095-017-0085-5



Keywords

  • Relighting
  • Photo Collection
  • Time-Lapse
  • Image Editing