NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering of portraits

Abstract

Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic aspects. To make progress towards a solution, this paper proposes a new structured, three-level, benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Moreover, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the faces. We perform evaluation for a wide variety of image stylisation methods (both portrait-specific and general purpose, and also both traditional NPR approaches and NST) using the new benchmark dataset.


Author information

Corresponding author

Correspondence to Paul L. Rosin.

Ethics declarations

The authors declare that they have no competing interests relevant to the content of this article.

Additional information

Paul L. Rosin is a professor in the School of Computer Science and Informatics, Cardiff University, UK. He received his Ph.D. degree from City University, London, in 1988. Previous posts were at Brunel University, UK; the Institute for Remote Sensing Applications, Joint Research Centre, Italy; and Curtin University of Technology, Australia. His research interests include low-level image processing, performance evaluation, shape analysis, facial analysis, medical image analysis, 3D mesh processing, cellular automata, non-photorealistic rendering, and cultural heritage.

Yu-Kun Lai is a professor in the School of Computer Science and Informatics, Cardiff University. He received his B.S. and Ph.D. degrees in computer science from Tsinghua University, China, in 2003 and 2008 respectively. His research interests include computer graphics, computer vision, geometric modeling, and image processing.

David Mould received his Ph.D. degree from the University of Toronto in 2002. Following a faculty appointment at the University of Saskatchewan, he became a professor at Carleton University, where he founded the Graphics, Imaging, and Games Lab. He is broadly interested in algorithmic creation of aesthetic objects, including images, music, 3D models, and computer-mediated experiences. His research centres on computer graphics and interactive systems, with particular emphasis on image stylisation, computer games, and procedural modeling.

Ran Yi is a Ph.D. student in the Department of Computer Science and Technology, Tsinghua University, where she received her B.Eng. degree in 2016. Her research interests include computational geometry, computer vision, and computer graphics.

Itamar Berger received his M.Sc. degree in computer science in 2012 from the Efi Arazi School of Computer Science at the Interdisciplinary Center in Israel, specializing in computer graphics, deep learning, and augmented reality.

Lars Doyle is a Ph.D. student in the School of Computer Science at Carleton University, where he works in the Graphics, Imaging, and Games Lab. His research interests focus on image processing, image stylization, and super-resolution. He received his master's and bachelor's degrees in computer science from Carleton University. Previously, he worked as a graphic designer.

Seungyong Lee is a professor of computer science and engineering at Pohang University of Science and Technology (POSTECH), Republic of Korea. He received his Ph.D. degree in computer science from Korea Advanced Institute of Science and Technology (KAIST) in 1995. His current research interests include image and video processing, deep learning based computational photography, and 3D scene reconstruction.

Chuan Li is a research scientist at Lambda Labs. His work focuses specifically on the convergence of computer graphics, computer vision, and machine learning. He completed his Ph.D. degree in image-based modeling at the University of Bath. Before joining Lambda Labs, he was a postdoctoral researcher at the Max Planck Institute of Informatics and a research associate at Utrecht University and at Mainz University. His research in visual data analysis and synthesis has been published at CVPR, ICCV, ECCV, NIPS, and SIGGRAPH.

Yong-Jin Liu is a tenured full professor in the Department of Computer Science and Technology, Tsinghua University. He received his B.Eng. degree from Tianjin University, China, in 1998, and his Ph.D. degree from Hong Kong University of Science and Technology, China, in 2004. His research interests include cognition computation, computational geometry, computer graphics, and computer vision.

Amir Semmo is a post-doctoral researcher with the Visual Computing & Visual Analytics group of the Hasso Plattner Institute, Germany, and is the head of R&D at Digital Masterpieces. In 2016, he received his doctoral degree on non-photorealistic rendering for 3D geospatial data. His main research topics include image and video processing, computer vision, and GPU computing. He is particularly interested in expressive rendering on mobile devices, image stylisation, and the processing of multi-dimensional video data.

Ariel Shamir is the dean of the Efi Arazi School of Computer Science at the Interdisciplinary Center in Israel. He received his Ph.D. degree in computer science in 2000 from the Hebrew University of Jerusalem, and spent two years as a postdoctoral researcher at the University of Texas at Austin. He is currently an associate editor for ACM TOG and CVM. He was named one of the most highly cited researchers on the Thomson Reuters list in 2015. He has broad commercial experience consulting for various companies. He specializes in geometric modeling, computer graphics, image processing, and machine learning.

Minjung Son received her B.S., M.S., and Ph.D. degrees from Pohang University of Science and Technology (POSTECH) in 2005, 2007, and 2014, respectively, all in computer science and engineering. Since 2014, she has been with the Samsung Advanced Institute of Technology, Suwon, Republic of Korea, as a senior/principal researcher.

Holger Winnemöller received his B.Sc., B.Sc. (Hons), and M.Sc. degrees in computer science from Rhodes University, South Africa, between 1998 and 2002. He then moved to the US, where in 2006 he received his Ph.D. degree from Northwestern University. Since 2007, he has been with Adobe Research in Seattle, Washington, where he is currently a principal scientist. His research domains include non-photorealistic rendering and novel digital media, while his current research focuses on creative tools for aspiring (non-professional) artists and casual creativity.

Electronic supplementary material

41095_2021_255_MOESM1_ESM.pdf

NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering of portraits [Electronic supplementary material]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Cite this article

Rosin, P. L., Lai, Y.-K., Mould, D. et al. NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering of portraits. Computational Visual Media 8, 445–465 (2022). https://doi.org/10.1007/s41095-021-0255-3


Keywords

  • non-photorealistic rendering (NPR)
  • image stylization
  • style transfer
  • portrait
  • evaluation
  • benchmark