
Animating pictures of water scenes using video retrieval

Abstract

We present a system for quickly and easily creating an animation of a water scene from a single image. Our method relies on a database of water-scene videos and a video retrieval technique. Given an input image, alpha masks specifying regions of interest, and sketches specifying flow directions, our system first retrieves suitable candidate videos from the database and creates a candidate animation for each region of interest by compositing the input image with each retrieved video; thanks to parallel distributed processing, this step takes less than one minute. The system then lets the user interactively adjust the speed of each candidate animation and select the most appropriate one. Once an animation has been selected for every region, the resulting animation is complete. Finally, the user can optionally apply a texture synthesis algorithm to recover the appearance of the input image. We demonstrate that our system allows the user to create a variety of animations of water scenes.
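To make the retrieve-then-composite step concrete, the sketch below illustrates the general idea in Python. All names and feature choices here are hypothetical placeholders (a mean-color appearance feature, a dominant flow direction per clip, and helpers such as score_clip and composite); the paper's actual descriptors, distance measures, and distributed implementation are not described on this page, and a local process pool merely stands in for the parallel distributed processing mentioned in the abstract.

    # Minimal sketch of a retrieve-then-composite pipeline (assumptions only;
    # not the authors' implementation).
    from concurrent.futures import ProcessPoolExecutor
    from dataclasses import dataclass

    import numpy as np


    @dataclass
    class WaterClip:
        """One database entry: a short water video plus precomputed features."""
        frames: np.ndarray      # (T, H, W, 3) floats in [0, 1]
        mean_color: np.ndarray  # (3,) average RGB, a stand-in appearance feature
        flow_dir: np.ndarray    # (2,) dominant 2D flow direction, unit length


    def score_clip(clip, region_color, sketch_dir):
        """Lower is better: appearance distance plus flow-direction mismatch."""
        appearance = np.linalg.norm(clip.mean_color - region_color)
        direction = 1.0 - float(np.dot(clip.flow_dir, sketch_dir))
        return appearance + direction


    def composite(image, alpha, clip):
        """Blend the clip into the masked region of the still image, per frame."""
        a = alpha[None, ..., None]                        # (1, H, W, 1)
        return a * clip.frames + (1.0 - a) * image[None]  # (T, H, W, 3)


    def candidate_animations(image, alpha, sketch_dir, database, k=5):
        """Rank the database for one region of interest and composite the
        top-k candidates in parallel."""
        region_color = image[alpha > 0.5].mean(axis=0)
        ranked = sorted(database,
                        key=lambda c: score_clip(c, region_color, sketch_dir))
        with ProcessPoolExecutor() as pool:
            return list(pool.map(composite,
                                 [image] * k, [alpha] * k, ranked[:k]))

In the workflow described above, the resulting composites would then be shown to the user, who interactively adjusts playback speed, picks one animation per region, and optionally runs the texture synthesis pass to restore the input image's appearance.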

Acknowledgements

We would like to thank the anonymous reviewers for their insightful and constructive comments. Many thanks also go to Ayumi Kimura for discussions and encouragement. This work was supported by JSPS KAKENHI Grant Numbers JP15H05924 and JP25730071. This work was supported by Japan Science and Technology Agency, CREST. This work was partially supported by the Joint Research Program (Short-term Collaborative Research) of the Institute of Mathematics for Industry, Kyushu University. Yoshinori Dobashi was partially supported by UEI Research.

Author information

Correspondence to Makoto Okabe.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 222714 KB)

About this article

Cite this article

Okabe, M., Dobashi, Y. & Anjyo, K. Animating pictures of water scenes using video retrieval. Vis Comput 34, 347–358 (2018). doi:10.1007/s00371-016-1337-6


Keywords

  • Single image
  • Interactive design
  • Video database
  • Video analysis/synthesis
  • Fluid animation
  • Texture analysis/synthesis