
Structure-preserving NPR framework for image abstraction and stylization


This work presents a structure-preserving non-photorealistic rendering (NPR) framework that produces effective abstracted and stylized output by manipulating the visual features of a 2D color image. The framework distills prominent structural features, such as dominant edges, medium-scale details, curved discontinuous edges, silhouettes, dendritic structures and curved boundaries, while suppressing superfluous details such as noise, texture, irregular gradients, small-scale details and block artifacts. At every stage, it enhances significant image properties, such as color, contrast, edge strength and sharpness, based on the available statistical feature information and predefined conditions; this improves quality assessment measures such as PSNR and SSIM while reducing image complexity and noise. The framework considers both image-space and object-space information to produce abstraction and stylization, identifying emphasized structural elements with the Harris feature detector. By comprehensively integrating a sequence of NPR image filters selected through rigorous experimental analysis, it preserves the structural features in the foreground of an image while diminishing the background content. The proposed work is implemented in MATLAB 2018 on a high-performance computer providing a 6.6 teraflop/s computing environment with an Nvidia Tesla P100 GPU. The output of every stage is evaluated with subjective assessments and quality assessment techniques based on various statistical measures; in this manner, contextual features in an image are identified and well preserved. The effectiveness of the proposed work is validated by conducting experiments on the David Mould dataset and Flickr images and comparing the obtained results with similar contemporary work cited in the literature.
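The Harris-based identification of emphasized structural elements mentioned above can be sketched as follows. This is an illustrative pure-NumPy reimplementation, not the paper's MATLAB pipeline; the window size, the `k` constant and the `box3` mean-filter helper are choices made here for brevity (a Gaussian window is more common in practice).

```python
import numpy as np

def box3(a):
    """3x3 mean filter via edge padding (a simple stand-in for Gaussian weighting)."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def harris_response(gray, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 at every pixel.

    `gray` is a 2D float array; large positive R marks corner-like
    structure, negative R marks edges, and values near zero mark
    flat regions.
    """
    # Image gradients via central differences.
    Iy, Ix = np.gradient(gray.astype(np.float64))
    # Locally averaged products of gradients (the structure tensor M).
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

Thresholding the response map and keeping local maxima yields the interest points that such a framework would treat as structure to preserve during abstraction.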
In addition, users' visual feedback and standard quality assessment techniques were used to evaluate the work. Finally, this work lists structure-preserving applications, constraints, framework implementation challenges and future work in the fields of image abstraction and stylization.
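As a concrete example of the full-reference quality scores mentioned above, PSNR between a reference image and a processed output can be computed as below. This is the standard textbook formula rather than the paper's MATLAB code, and the `peak` default assumes images scaled to [0, 1].

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means the processed
    image is closer to the reference."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0.0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For a uniform error of 0.1 on a [0, 1] image the score is exactly 20 dB. SSIM, the paper's other metric, additionally compares local luminance, contrast and structure statistics and is not reproduced here.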




This research work is funded by the Vision Group of Science and Technology, Government of Karnataka, under K-FIST L2 (Grant No. KSTePS/VGST-K_FIST L2/2019-20/GRD No. 758/315) with INR 40 lakhs. We, the authors, offer our heartfelt thanks to the High Performance Computing Lab, Department of Studies in Computer Science, University of Mysore, Mysore, Karnataka, India, for facilitating the high-speed computation. We also thank the following authors and resource persons for their courtesy in permitting publication of images pertaining to their work: (1) Dr David Mould, Carleton University, Ottawa, Ontario, Canada; (2) Ar. Yeshaswini Rondamath, Asst. Professor, BMSSA, Bangalore; (3) Prof. L. Xu, Image & Visual Computing Lab, Lenovo R&T, Hong Kong; (4) Q. Zhang, Chinese University of Hong Kong; (5) Dr Jan Eric Kyprianidis, Hamm-Lippstadt University of Applied Sciences, Germany; and (6) Dr Henry Kang, Korea Advanced Institute of Technology, Korea.

Author information



Corresponding author

Correspondence to M. P. Pavan Kumar.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kumar, M.P.P., Poornima, B., Nagendraswamy, H.S. et al. Structure-preserving NPR framework for image abstraction and stylization. J Supercomput 77, 8445–8513 (2021).



Keywords

  • Non-photorealistic rendering (NPR)
  • Image abstraction and stylization
  • Denoising convolutional neural network (DnCNN)
  • Graphical processing unit (GPU)
  • Structure interval gradient filtering (SIGF)