Filter Style Transfer Between Photos

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12351)

Abstract

Over the past few years, image-to-image style transfer has risen to the frontiers of neural image processing. While conventional methods have succeeded at tasks such as color and texture transfer between images, none can effectively handle the custom filter effects that users apply through platforms such as Instagram. In this paper, we introduce a new concept of style transfer, Filter Style Transfer (FST). Unlike conventional style transfer, FST can extract a custom filter style from a filtered style image and transfer it to a content image. FST first infers the original image from the filtered reference via image-to-image translation, then estimates the filter parameters from the difference between the two. To resolve the ill-posed nature of reconstructing the original image from the reference, we represent each pixel color of an image as a class mean and deviation. In addition, to handle intra-class color variation, we propose an uncertainty-based weighted least-squares method for restoring the original image. To the best of our knowledge, FST is the first style transfer method that can transfer custom filter effects between FHD images in under 2 ms on a mobile device without any textual context loss.
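The abstract does not specify the filter model or the exact weighting scheme, so the following is only a minimal sketch of the uncertainty-based weighted least-squares step it describes. It assumes a hypothetical per-channel affine filter (filtered ≈ a·original + b) and a per-pixel uncertainty map `sigma`; the function names `fit_filter_params` and `apply_filter` are illustrative, not from the paper.

```python
import numpy as np

def fit_filter_params(original, filtered, sigma):
    """Estimate per-channel affine filter parameters (a, b) such that
    filtered ≈ a * original + b, by weighted least squares with
    weights w = 1 / sigma**2 (higher uncertainty -> lower weight)."""
    params = []
    for c in range(original.shape[-1]):
        x = original[..., c].ravel()
        y = filtered[..., c].ravel()
        w = 1.0 / (sigma[..., c].ravel() ** 2 + 1e-8)
        # Design matrix for the affine model y = a*x + b.
        A = np.stack([x, np.ones_like(x)], axis=1)
        # Solve the weighted normal equations (A^T W A) p = A^T W y.
        AtW = A.T * w
        a, b = np.linalg.solve(AtW @ A, AtW @ y)
        params.append((float(a), float(b)))
    return params

def apply_filter(image, params):
    """Transfer the estimated filter to a new content image."""
    out = np.empty_like(image)
    for c, (a, b) in enumerate(params):
        out[..., c] = np.clip(a * image[..., c] + b, 0.0, 1.0)
    return out
```

Under this assumed model, fitting on a (reconstructed original, filtered reference) pair recovers the filter, which can then be applied to any content image; the real method estimates the original via image-to-image translation rather than having it available directly.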

Keywords

Photorealistic style transfer · Filter style transfer · Image-to-image translation

Supplementary material

Supplementary material 1: 504443_1_En_7_MOESM1_ESM.pdf (PDF, 118 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Visual Solution Lab., Samsung Electronics, Suwon, South Korea
  2. Visual Solution Lab., Samsung Electronics, Seoul, South Korea
  3. Visual Solution Lab., Samsung Electronics, Bucheon, South Korea