
The Visual Computer, Volume 35, Issue 4, pp 609–622

Importance-based approach for rough drawings

  • Heekyung Yang
  • Kyungha Min
Original Article

Abstract

We present a framework for producing rough drawings from photographs. Depicting a scene with a series of lines is one of the most effective methods of visual communication. Our rough-drawing framework comprises three steps: extracting lines from the image, estimating the importance of each line, and producing strokes that express various styles. To extract lines, we employ the widely used difference-of-Gaussians (DoG) filter and devise a fault-correcting line-shift scheme. Line importance is estimated by combining gradient and saliency. For efficient saliency estimation, we propose a stochastic content-based method. Various styles of rough drawing are produced by convolving adaptive stroke texture segments, which are prepared by sampling real stroke-texture images. We test our framework on a range of images and compare our results with real artwork and with existing schemes.
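To make the three-step pipeline concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a grayscale photograph and a precomputed saliency map, and the function names, parameters (sigma, k, tau, the gradient/saliency weights), and the simple stroke kernel are hypothetical choices for demonstration only.

```python
# Illustrative sketch of the pipeline described in the abstract:
# (1) DoG line extraction, (2) importance from gradient + saliency,
# (3) stroke rendering by convolving line pixels with a stroke-shaped kernel.
import numpy as np
from scipy import ndimage

def extract_dog_lines(gray, sigma=1.0, k=1.6, tau=0.98):
    """Thresholded difference-of-Gaussians: dark ridges become line pixels."""
    g1 = ndimage.gaussian_filter(gray, sigma)
    g2 = ndimage.gaussian_filter(gray, k * sigma)
    dog = g1 - tau * g2
    return (dog < 0).astype(np.float32)              # 1 where a line is detected

def line_importance(gray, saliency, w_grad=0.5, w_sal=0.5):
    """Combine gradient magnitude and a saliency map into per-pixel importance."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    grad = np.hypot(gx, gy)
    grad /= grad.max() + 1e-8                        # normalize to [0, 1]
    return w_grad * grad + w_sal * saliency

def render_strokes(lines, importance, stroke_kernel):
    """Rough stand-in for stroke-texture convolution: modulate line pixels
    by importance, then smear them with a stroke-shaped kernel."""
    weighted = lines * importance
    return ndimage.convolve(weighted, stroke_kernel, mode='reflect')

if __name__ == "__main__":
    gray = np.random.rand(128, 128).astype(np.float32)      # stand-in photograph
    saliency = np.random.rand(128, 128).astype(np.float32)  # stand-in saliency map
    stroke = np.ones((1, 9), dtype=np.float32) / 9.0        # horizontal stroke kernel
    lines = extract_dog_lines(gray)
    drawing = render_strokes(lines, line_importance(gray, saliency), stroke)
    print(drawing.shape)
```

In the paper's actual framework the stroke kernel is replaced by adaptive stroke texture segments sampled from real stroke images, and the saliency map comes from the proposed stochastic content-based estimator rather than a precomputed input.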

Keywords

Rough drawing · Pencil · Charcoal · Convolution · Saliency · DoG

Acknowledgements

This research was supported by grants NRF-2017R1D1A1B03034137 and NRF-2015R1D1A1A01061415 from the National Research Foundation (NRF) of Korea.

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science, Sangmyung University, Seoul, South Korea
