Lighting transfer across multiple views through local color transforms
We present a method for transferring lighting between photographs of a static scene. Our method takes as input a photo collection depicting a scene with varying viewpoints and lighting conditions. We cast lighting transfer as an edit propagation problem, where the transfer of local illumination across images is guided by sparse correspondences obtained through multi-view stereo. Instead of directly propagating color, we learn local color transforms from corresponding patches in pairs of images and propagate these transforms in an edge-aware manner to regions with no correspondences. Our color transforms model the large variability of appearance changes in local regions of the scene, and are robust to missing or inaccurate correspondences. The method is fully automatic and can transfer strong shadows between images. We show applications of our image relighting method for enhancing photographs, browsing photo collections with harmonized lighting, and generating synthetic time-lapse sequences.
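The abstract describes learning local color transforms from corresponding patches rather than propagating colors directly. The excerpt does not specify the transform model, so the sketch below assumes a simple per-patch affine model in RGB: a 3×4 matrix fit by least squares to corresponding pixel samples, which is one common choice for local color mappings. The function names are illustrative, not from the paper.

```python
import numpy as np

def fit_affine_color_transform(src, dst):
    """Fit a 3x4 affine transform T so that T @ [r, g, b, 1]^T
    approximates the target colors.

    src, dst: (N, 3) arrays of corresponding RGB samples taken from
    a local patch in the source and target images.
    Returns T with shape (3, 4), solved in the least-squares sense.
    """
    ones = np.ones((src.shape[0], 1))
    A = np.hstack([src, ones])                    # (N, 4) homogeneous colors
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (4, 3) solution
    return T.T                                    # (3, 4)

def apply_transform(T, pixels):
    """Apply a fitted 3x4 affine color transform to (N, 3) RGB pixels."""
    ones = np.ones((pixels.shape[0], 1))
    return np.hstack([pixels, ones]) @ T.T
```

In a full pipeline along the lines the abstract sketches, one such transform would be fit per region with multi-view-stereo correspondences, then propagated in an edge-aware fashion (e.g., with a bilateral-style weighting) to regions without correspondences; that propagation step is omitted here.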
Keywords: relighting; photo collection; time-lapse; image editing
We would like to thank all reviewers for their comments and suggestions. The first author carried out the earlier phase of the research at the National University of Singapore with support from the School of Computing. This research is supported by the BeingThere Centre, a collaboration between Nanyang Technological University Singapore, Eidgenössische Technische Hochschule Zürich, and the University of North Carolina at Chapel Hill. The BeingThere Centre is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and is administered by the Interactive Digital Media Programme Office.
Electronic supplementary material (approximately 27 MB) accompanies this paper.
Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.