Abstract
With the development of content-based multimedia analysis, virtual content insertion has been widely used and studied for video enrichment and multimedia advertising. However, automatically inserting user-selected virtual content into personal videos in a less intrusive manner, with an attractive presentation, remains a challenging problem. In this chapter, we present an evolution-based virtual content insertion system that inserts virtual content into videos with evolved animations, following predefined behaviors that emulate the characteristics of evolutionary biology. The videos are considered not only as carriers of the message conveyed by the virtual content but also as the environment in which the lifelike virtual content lives. Thus, the inserted virtual content is affected by the video, triggering a series of artificial evolutions and evolving its appearance and behavior while interacting with the video content. By inserting virtual content into videos through the system, users can easily create entertaining storylines and turn their personal videos into visually appealing ones. In addition, it brings a new opportunity to increase advertising revenue for the video assets of the media industry and online video-sharing websites.
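To make the idea of video-driven artificial evolution concrete, the following is a minimal toy sketch, not the chapter's actual algorithm. It assumes a virtual content item carrying a few "genes" (scale, opacity, position), per-frame video features reduced to two illustrative scalars (a saliency score and a motion magnitude), and a hypothetical fitness rule that favors visible content placed in calm, low-saliency regions; each frame, the fittest individual survives and seeds mutated offspring.

```python
import random

class VirtualContent:
    """Toy virtual content whose appearance is encoded as genes in [0, 1]."""

    def __init__(self, scale=1.0, opacity=0.8, x=0.5, y=0.5):
        self.genes = {"scale": scale, "opacity": opacity, "x": x, "y": y}

    def fitness(self, saliency, motion):
        # Illustrative rule: be visible (high opacity) while avoiding
        # salient, fast-moving regions of the video frame.
        return self.genes["opacity"] * (1.0 - saliency) * (1.0 - motion)

    def mutate(self, rate=0.1, rng=random):
        # Perturb each gene slightly, clamped to [0, 1].
        g = {k: min(1.0, max(0.0, v + rng.uniform(-rate, rate)))
             for k, v in self.genes.items()}
        return VirtualContent(g["scale"], g["opacity"], g["x"], g["y"])

def evolve_step(population, saliency, motion, rng):
    """One generation: the fittest individual survives and spawns mutants."""
    parent = max(population, key=lambda c: c.fitness(saliency, motion))
    return [parent] + [parent.mutate(rng=rng) for _ in range(len(population) - 1)]

# Usage: evolve a small population over three frames with made-up features.
rng = random.Random(0)
pop = [VirtualContent() for _ in range(8)]
for saliency, motion in [(0.2, 0.1), (0.7, 0.4), (0.3, 0.2)]:
    pop = evolve_step(pop, saliency, motion, rng)
best = max(pop, key=lambda c: c.fitness(0.3, 0.2))
```

In the full system, the environment signals would come from visual analysis of the video (e.g., attention and motion estimation), and the evolved genes would drive the rendered animation of the inserted content.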
Copyright information
© 2010 Springer-Verlag London Limited
About this chapter
Cite this chapter
Chang, CH., Wu, JL. (2010). Evolution-based Virtual Content Insertion with Visually Virtual Interactions in Videos. In: Shao, L., Shan, C., Luo, J., Etoh, M. (eds) Multimedia Interaction and Intelligent User Interfaces. Advances in Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-84996-507-1_7
DOI: https://doi.org/10.1007/978-1-84996-507-1_7
Publisher Name: Springer, London
Print ISBN: 978-1-84996-506-4
Online ISBN: 978-1-84996-507-1
eBook Packages: Computer Science (R0)