Computational Visual Media, Volume 3, Issue 4, pp. 315–324

Lighting transfer across multiple views through local color transforms

  • Qian Zhang
  • Pierre-Yves Laffont
  • Terence Sim
Open Access
Research Article


Abstract

We present a method for transferring lighting between photographs of a static scene. Our method takes as input a photo collection depicting a scene with varying viewpoints and lighting conditions. We cast lighting transfer as an edit propagation problem, where the transfer of local illumination across images is guided by sparse correspondences obtained through multi-view stereo. Instead of directly propagating color, we learn local color transforms from corresponding patches in pairs of images and propagate these transforms in an edge-aware manner to regions with no correspondences. Our color transforms model the large variability of appearance changes in local regions of the scene, and are robust to missing or inaccurate correspondences. The method is fully automatic and can transfer strong shadows between images. We show applications of our image relighting method for enhancing photographs, browsing photo collections with harmonized lighting, and generating synthetic time-lapse sequences.
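The abstract's core idea — fitting a local color transform from corresponding patches rather than copying colors directly — can be sketched as a per-patch least-squares fit. The sketch below is a minimal illustration under stated assumptions: it uses an affine color model and hypothetical function names, and it omits the multi-view correspondence search and the edge-aware propagation that the paper builds on top of this step.

```python
import numpy as np

def fit_affine_color_transform(src, dst):
    """Fit a 3x4 affine transform T so that dst ~= A @ rgb + b for each pixel.

    src, dst: (N, 3) arrays of RGB values from corresponding patches.
    Returns T = [A | b] of shape (3, 4), estimated in the least-squares sense.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (N, 4) homogeneous colors
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (4, 3) solution of X @ W = dst
    return W.T                                   # (3, 4) affine matrix [A | b]

def apply_affine_color_transform(T, pixels):
    """Apply a fitted (3, 4) affine color transform to (N, 3) RGB pixels."""
    n = pixels.shape[0]
    X = np.hstack([pixels, np.ones((n, 1))])
    return X @ T.T
```

In this spirit, one such transform would be estimated per corresponding patch pair, and the resulting transforms (not the colors themselves) would then be interpolated across image regions that lack correspondences.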


Keywords: relighting; photo collection; time-lapse; image editing



We would like to thank all reviewers for their comments and suggestions. The first author carried out the earlier phase of the research at the National University of Singapore with support from the School of Computing. This research is supported by the BeingThere Centre, a collaboration between Nanyang Technological University Singapore, Eidgenössische Technische Hochschule Zürich, and the University of North Carolina at Chapel Hill. The BeingThere Centre is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and is administered by the Interactive Digital Media Programme Office.

Supplementary material

41095_2017_85_MOESM1_ESM.pdf (1.9 MB)
Lighting transfer across multiple views through local color transforms

Supplementary material, approximately 27 MB.



Copyright information

© The Author(s) 2017

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Authors and Affiliations

  1. Nanyang Technological University, Singapore, Singapore
  2. ETH Zurich, Zurich, Switzerland
  3. National University of Singapore, Singapore, Singapore
