
Neural style transfer based on deep feature synthesis

  • Original article
  • The Visual Computer

Abstract

Neural style transfer exploits the high-level features of deep neural networks, so stylized images can represent content and style at a high semantic level. However, neural networks are end-to-end black-box systems, and previous style transfer models construct the target image from the overall features of the image, so they cannot effectively intervene in the content and style representations. This paper presents a locally controllable, nonparametric neural style transfer model. We treat style transfer as a feature matching process that is independent of the neural network and propose a deep-to-shallow feature synthesis algorithm: the target feature map is synthesized layer by layer in the deep feature space and then transformed into the target image. Because feature synthesis is a local manipulation of feature maps, it is easy to control local texture structure, content detail, and texture distribution. Building on this synthesis algorithm, we propose a multi-exemplar synthesis method that makes local stroke directions better match the content semantics, or combines multiple styles in a single image. Experiments show that our model produces more impressive results than previous methods.
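To make the pipeline described above concrete, the following is a minimal sketch of the general idea in PyTorch: nonparametric nearest-neighbor matching of feature patches at VGG-19 layers, followed by optimization-based inversion of the synthesized feature maps back into an image. This is not the authors' implementation; the layer indices, the 3x3 patch size, the Adam-based inversion, and the function names (stylize, match_patches, invert) are illustrative assumptions, and in particular the sketch matches each layer independently rather than propagating the deep result to shallower layers as the paper does.

```python
# A minimal sketch (PyTorch + torchvision) of the general idea only. It is
# NOT the paper's implementation: the VGG-19 layer picks, the 3x3 patch
# size, and the independent per-layer matching are illustrative assumptions,
# and the deep-layer result is not propagated to shallower layers here.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights="DEFAULT").features.eval().to(device)
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = [20, 11]  # relu4_1, relu3_1 in torchvision's VGG-19 (deep -> shallow)

def features(img, layer):
    """Run the network up to `layer` and return that feature map."""
    x = img
    for i, module in enumerate(vgg):
        x = module(x)
        if i == layer:
            return x

def match_patches(content_f, style_f, patch=3):
    """Nonparametric feature matching: replace every content-feature patch
    with its most similar style-feature patch (cosine similarity), then
    overlap-average the matched patches into a target feature map.
    Small images assumed: the full M x N similarity matrix is memory-hungry."""
    pad = patch // 2
    s = F.unfold(style_f, patch, padding=pad).squeeze(0).t()    # (N, C*p*p)
    c = F.unfold(content_f, patch, padding=pad).squeeze(0).t()  # (M, C*p*p)
    idx = (F.normalize(c, dim=1) @ F.normalize(s, dim=1).t()).argmax(1)
    out = s[idx].t().unsqueeze(0)                               # (1, C*p*p, M)
    size = content_f.shape[-2:]
    acc = F.fold(out, size, patch, padding=pad)
    cnt = F.fold(torch.ones_like(out), size, patch, padding=pad)
    return acc / cnt

def invert(targets, init, steps=300):
    """Optimization-based inversion: recover an image whose features
    match the synthesized feature maps (a pre-image computation)."""
    img = init.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(F.mse_loss(features(img, l), t)
                   for l, t in zip(LAYERS, targets))
        loss.backward()
        opt.step()
    return img.detach()

def stylize(content, style):
    """content/style: ImageNet-normalized tensors of shape (1, 3, H, W)."""
    targets = [match_patches(features(content, l), features(style, l))
               for l in LAYERS]
    return invert(targets, content)
```

On small, ImageNet-normalized inputs, stylize(content, style) returns a stylized image tensor. Because the matching is a purely local operation on feature maps, restricting the nearest-neighbor search to a masked subset of style patches is the natural hook for the kind of local control over texture structure and stroke direction that the abstract describes.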





Acknowledgements

This work was supported by the National Natural Science Foundation of China (Project No. 61340019).

Author information


Corresponding author

Correspondence to Dajin Li.

Ethics declarations

Conflict of interest

Dajin Li declares that he has no conflict of interest. Wenran Gao declares that he has no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, D., Gao, W. Neural style transfer based on deep feature synthesis. Vis Comput 39, 5359–5373 (2023). https://doi.org/10.1007/s00371-022-02664-2

