Towards a General Framework for Artistic Style Transfer

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10783)

Abstract

In recent years, artificial intelligence has become increasingly sophisticated in the creation of fine art. In painting especially, artificial methods have reached a new level of maturity in replicating perceptual quality. These systems are able to separate the style and content of given images, enabling them to recombine and modify these facets to create novel imagery. This work defines a general framework for conducting artistic style transfer, which allows state-of-the-art algorithms to be recombined and modified in a structured way for further investigation and profiling of artistic style transfer.
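
The style/content separation described in the abstract follows the line of work on neural style transfer by Gatys et al., on which the framework builds: content is matched through CNN feature activations, and style through Gram matrices of those activations. Below is a minimal sketch of this idea in PyTorch; the VGG-19 layer indices, loss weights, and optimiser settings are illustrative assumptions, not the exact configuration investigated in the paper.

    # Minimal sketch of Gatys-style neural style transfer (content loss on CNN
    # features, style loss on Gram matrices). Assumes PyTorch + torchvision >= 0.13;
    # layer indices, weights, and iteration counts are illustrative assumptions.
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19

    def gram_matrix(feat):
        # (1, C, H, W) feature map -> normalised C x C channel correlation matrix
        _, c, h, w = feat.shape
        f = feat.reshape(c, h * w)
        return f @ f.t() / (c * h * w)

    def style_transfer(content, style, steps=50, style_weight=1e6, content_weight=1.0):
        """content, style: (1, 3, H, W) tensors, ImageNet-normalised."""
        device = content.device
        cnn = vgg19(weights="IMAGENET1K_V1").features.to(device).eval()
        for p in cnn.parameters():
            p.requires_grad_(False)

        style_layers = {1, 6, 11, 20, 29}   # assumed: relu1_1 .. relu5_1
        content_layer = 22                   # assumed: relu4_2

        def extract(x):
            styles, content_feat = {}, None
            for i, layer in enumerate(cnn):
                x = layer(x)
                if i in style_layers:
                    styles[i] = x
                if i == content_layer:
                    content_feat = x
            return styles, content_feat

        with torch.no_grad():
            style_targets = {i: gram_matrix(f) for i, f in extract(style)[0].items()}
            content_target = extract(content)[1]

        # Optimise the pixels of the output image directly; the original
        # formulation likewise uses L-BFGS.
        img = content.clone().requires_grad_(True)
        opt = torch.optim.LBFGS([img])

        for _ in range(steps):
            def closure():
                opt.zero_grad()
                styles, content_feat = extract(img)
                style_loss = sum(F.mse_loss(gram_matrix(f), style_targets[i])
                                 for i, f in styles.items())
                content_loss = F.mse_loss(content_feat, content_target)
                loss = style_weight * style_loss + content_weight * content_loss
                loss.backward()
                return loss
            opt.step(closure)

        return img.detach()

A general framework in the sense of the abstract would treat pieces such as the feature extractor, the individual loss terms, and the optimiser as exchangeable components rather than fixed choices.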

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Volkswagen AG, Wolfsburg, Germany
  2. Faculty of Computer Science, University of Magdeburg, Magdeburg, Germany