Machine Vision and Applications, Volume 29, Issue 3, pp 503–512

Image-based pencil drawing synthesized using convolutional neural network feature maps

  • Xiuxia Cai
  • Bin Song
Original Paper

Abstract

Most conventional pencil-drawing synthesis methods are based on geometry and stroke models, or use only classic edge detection to extract image edge features. In this paper, we propose a new method to produce a pencil drawing from a natural image. The synthesized result not only yields a pencil sketch but also preserves the color tone of the natural image, and the drawing style is flexible. The sketch and style are learned from the edges of the original natural image and from one exemplar pencil drawing by an artist. This is accomplished using the convolutional neural network (CNN) feature maps of the natural image and of the exemplar pencil-drawing style image. Large-scale bound-constrained optimization (L-BFGS-B) is applied to synthesize a new pencil sketch whose style is similar to that of the exemplar. We evaluate the proposed method by applying it to different kinds of images and textures. Experimental results demonstrate that our method outperforms conventional methods in clarity and color tone, and is also flexible in drawing style.
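The optimization described in the abstract, matching feature maps of a content image and feature-statistics (Gram matrices) of a style exemplar under bound-constrained L-BFGS, can be sketched in miniature as follows. This is not the authors' implementation: the random linear filter bank `Ws`, the toy sizes `C` and `P`, the 1-D "images", and the style weight `10.0` are illustrative stand-ins for the CNN (VGG-type) feature maps the paper actually uses.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
C, P = 4, 32  # toy number of filters and pixels

# Toy stand-in for a CNN feature extractor: a fixed random linear
# filter bank. A real implementation would use pretrained CNN activations.
Ws = rng.standard_normal((C, P, P)) / np.sqrt(P)

def feats(x):
    # "Feature maps": one P-dimensional response per filter, shape (C, P).
    return np.einsum('cpq,q->cp', Ws, x)

def gram(F):
    # Gram matrix of feature maps: the standard style statistic.
    return F @ F.T / P

content = rng.uniform(0, 1, P)  # stand-in for the natural image
style = rng.uniform(0, 1, P)    # stand-in for the pencil exemplar
Fc, Gs = feats(content), gram(feats(style))

def loss(x):
    # Content term keeps the natural image's structure; style term
    # pulls the Gram statistics toward the exemplar's.
    F = feats(x)
    return np.sum((F - Fc) ** 2) + 10.0 * np.sum((gram(F) - Gs) ** 2)

x0 = rng.uniform(0, 1, P)
# Bound-constrained L-BFGS keeps pixel values in [0, 1].
res = minimize(loss, x0, method='L-BFGS-B', bounds=[(0, 1)] * P)
```

The bounds mirror the "bound-constrained" aspect of L-BFGS-B (reference 30 in the paper's bibliography): pixel intensities stay in a valid range during optimization rather than being clipped afterwards.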


Keywords: Deep learning · Pencil sketch drawing · Feature maps · CNN

Acknowledgements

We thank the anonymous reviewers and the editor for their valuable comments. This work has been supported by the National Natural Science Foundation of China (Nos. 61772387 and 61372068), the Research Fund for the Doctoral Program of Higher Education of China (No. 20130203110005), the Fundamental Research Funds for the Central Universities (No. K5051301033), the 111 Project (No. B08038), and the ISN State Key Laboratory.



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China
