Image and Video-Based Artistic Stylisation, pp. 257–284

Part of the Computational Imaging and Vision book series (CIVI, volume 42)

Temporally Coherent Video Stylization

Chapter

Abstract

The transformation of video clips into stylized animations remains an active research topic in Computer Graphics. A key challenge is to reproduce the look of traditional artistic styles whilst minimizing distracting flickering and sliding artifacts, i.e. to achieve temporal coherence. This chapter surveys the spectrum of available video stylization techniques, focusing on algorithms that encourage the temporally coherent placement of rendering marks, and discusses the trade-offs necessary to achieve coherence. We begin with flow-based adaptations of stroke-based rendering (SBR) and texture advection capable of painting video. We then chart the development of the field, and its fusion with Computer Vision, to deliver coherent mid-level scene representations. These representations enable the rotoscoping of rendering marks onto temporally coherent video regions, enhancing both the diversity and the temporal coherence of the stylization. In discussing coherence, we formalize the problem of temporal coherence in terms of three criteria, and use these to compare and contrast video stylization algorithms.
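The flow-based strategy mentioned above can be illustrated with a minimal sketch: stroke seed points placed on one frame are advected along dense optical flow so that paint marks follow scene motion rather than being regenerated (and hence flickering) on every frame. The helper names below (`advect_seeds`, `paint_frame`) are hypothetical, and OpenCV's Farneback flow stands in for whichever flow estimator a particular algorithm uses; this is not the chapter's specific method, only an illustration of the general idea.

```python
import cv2
import numpy as np

def advect_seeds(seeds, prev_gray, next_gray):
    """Carry stroke seed positions from one frame to the next along dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    moved = []
    for x, y in seeds:
        # Sample the flow vector at the (clamped) seed location and displace the seed.
        dx, dy = flow[int(np.clip(y, 0, h - 1)), int(np.clip(x, 0, w - 1))]
        moved.append((x + dx, y + dy))
    return moved

def paint_frame(frame, seeds, radius=4):
    """Render each seed as a simple disc 'stroke' coloured by the underlying pixel."""
    canvas = np.zeros_like(frame)
    for x, y in seeds:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < frame.shape[1] and 0 <= yi < frame.shape[0]:
            color = tuple(int(c) for c in frame[yi, xi])
            cv2.circle(canvas, (xi, yi), radius, color, -1)
    return canvas
```

Because the seeds persist across frames and only move with the estimated flow, the rendered marks slide with the scene content; real systems additionally delete occluded seeds and insert new ones where coverage becomes sparse.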

Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  • Pierre Bénard, University of Toronto, Toronto, Canada
  • Joëlle Thollot, LJK, INRIA, Grenoble University, Saint Ismier, France
  • John Collomosse, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
