
AniCode: authoring coded artifacts for network-free personalized animations

  • Original Article
  • Published in: The Visual Computer

Abstract

Time-based media are used in applications ranging from demonstrating the operation of home appliances to explaining new scientific discoveries. However, creating effective time-based media is challenging. We introduce a new framework for authoring and consuming time-based media. An author encodes an animation in a printed code and affixes the code to an object. A consumer captures an image of the object through a mobile application, and the image, together with the code, is used to generate a video on the consumer's local device. Our system is designed to be low cost and easy to use. By not requiring an Internet connection to deliver the animation, the framework enhances the privacy of the communication. By requiring the user to have a direct line-of-sight view of the object, the framework provides personalized animations that decode only in the intended context. Animation schemes in the system include 2D and 3D geometric transformations, color transformation, and annotation. We demonstrate the new framework with sample applications from a wide range of domains, and we evaluate the ease of use and effectiveness of our system with a user study.
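To make the pipeline concrete, the Python sketch below decodes a small JSON animation specification from the printed code in a captured photo and plays a 2D geometric transformation over that photo, entirely on the local device. This is a minimal illustration, not the authors' implementation: the spec format, its field names (cx, cy, degrees, frames), and the file name are assumptions, and OpenCV's stock QR detector stands in for whatever code format the system actually uses.

    # A minimal sketch, assuming the printed code carries a JSON animation
    # spec (hypothetical format). Requires opencv-python.
    import json
    import cv2

    def read_spec(image):
        """Decode the printed code in the photo into an animation spec,
        e.g. {"cx": 0.5, "cy": 0.5, "degrees": 90, "frames": 60}."""
        payload, _, _ = cv2.QRCodeDetector().detectAndDecode(image)
        if not payload:
            raise ValueError("no readable code in the captured image")
        return json.loads(payload)

    def render(photo, spec):
        """Yield frames that rotate the photo incrementally about the
        spec's pivot point (one 2D geometric transformation scheme)."""
        h, w = photo.shape[:2]
        center = (spec["cx"] * w, spec["cy"] * h)
        for i in range(spec["frames"] + 1):
            angle = spec["degrees"] * i / spec["frames"]
            matrix = cv2.getRotationMatrix2D(center, angle, 1.0)
            yield cv2.warpAffine(photo, matrix, (w, h))

    photo = cv2.imread("captured_object.jpg")  # consumer's line-of-sight photo
    for frame in render(photo, read_spec(photo)):
        cv2.imshow("AniCode sketch", frame)
        if cv2.waitKey(16) == 27:  # ~60 fps playback; Esc exits
            break

The other schemes named in the abstract fit the same shape: a color transformation would interpolate a per-frame color map instead of an affine matrix, and annotation would composite overlays onto each frame.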





Author information


Corresponding author

Correspondence to Shiyu Qiu.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary material 1 (mp4 347410 KB)

Supplementary material 2 (mp4 338771 KB)


About this article


Cite this article

Wang, Z., Qiu, S., Chen, Q. et al. AniCode: authoring coded artifacts for network-free personalized animations. Vis Comput 35, 885–897 (2019). https://doi.org/10.1007/s00371-019-01681-y

