Abstract
Video captioning is a crucial task for video understanding and has attracted much attention recently. The Regions-of-Interest (ROI) of a video contain the information most relevant to the audience. Unlike the ROI of images, the ROI of videos are temporally continuous (e.g. a moving object or an action in a video clip) and are where viewers' attention is focused. Inspired by this insight, we propose an approach that automatically traces the spatial-temporal saliency content for video captioning by capturing the temporal structure of ROI candidates. To this end, we employ a set of modules named tracing LSTMs, each of which traces a single ROI candidate of the feature maps across the entire video. The temporal structure of the global features and the ROI features is combined to obtain both a coarse understanding of the video content and ROI information, which is used as the initial state of the decoder that generates captions. We verify the effectiveness of our method on a public benchmark, the Microsoft Video Description Corpus (MSVD). The experimental results demonstrate that capturing temporal ROI information with tracing LSTMs enhances the representation of the input videos and achieves state-of-the-art results.
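The abstract describes an encoder in which each tracing LSTM follows one ROI candidate across the video, and the resulting ROI summaries are fused with a global temporal summary to initialise the caption decoder. Below is a minimal PyTorch sketch of that idea; the module names, feature dimensions, number of ROI candidates, and the fusion layer are assumptions made for illustration, not the authors' implementation (which was built on Torch7 [17]).

```python
# Illustrative sketch of the tracing-LSTM encoder described in the abstract.
# Dimensions, the number of ROI candidates, and the fusion layer are assumed.
import torch
import torch.nn as nn

class TracingEncoder(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=512, num_rois=4):
        super().__init__()
        # One LSTM over the global (frame-level) CNN features.
        self.global_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # One "tracing" LSTM per ROI candidate, following that region across frames.
        self.tracing_lstms = nn.ModuleList(
            [nn.LSTM(feat_dim, hidden_dim, batch_first=True) for _ in range(num_rois)]
        )
        # Fuse global and ROI summaries into the decoder's initial state.
        self.fuse = nn.Linear(hidden_dim * (1 + num_rois), hidden_dim)

    def forward(self, global_feats, roi_feats):
        # global_feats: (B, T, feat_dim) pooled frame features
        # roi_feats:    (B, num_rois, T, feat_dim) features of each traced ROI candidate
        _, (h_g, _) = self.global_lstm(global_feats)
        summaries = [h_g[-1]]
        for i, lstm in enumerate(self.tracing_lstms):
            _, (h_r, _) = lstm(roi_feats[:, i])
            summaries.append(h_r[-1])
        # Combined video representation used to initialise the caption decoder.
        return torch.tanh(self.fuse(torch.cat(summaries, dim=-1)))
```

In this reading, the fused vector plays the role of the "initial states of the decoder" mentioned in the abstract; how the ROI candidates themselves are extracted from the feature maps is not covered by this sketch.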
References
Yang, Z., Han, Y., Wang, Z.: Catching the temporal regions-of-interest for video captioning. In: Proceedings of the 2017 ACM on Multimedia Conference. ACM (2017)
Yu, Y., et al.: End-to-end concept word detection for video captioning, retrieval, and question answering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
Baraldi, L., Grana, C., Cucchiara, R.: Hierarchical boundary-aware neural encoder for video captioning. In: CVPR (2017)
Pan, P., et al.: Hierarchical recurrent neural encoder for video representation with application to captioning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
Yu, H., et al.: Video paragraph captioning using hierarchical recurrent neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
Gao, L., et al.: Video captioning with attention-based LSTM and semantic consistency. IEEE Trans. Multimedia 19(9), 2045–2055 (2017)
Yao, L., et al.: Describing videos by exploiting temporal structure. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
Venugopalan, S., et al.: Sequence to sequence-video to text. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
Srivastava, N., et al.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
Collobert, R., Kavukcuoglu, K., Farabet, C.: Torch7: a MATLAB-like environment for machine learning. In: BigLearn, NIPS Workshop (2011)
Chen, X., et al.: Microsoft COCO captions: data collection and evaluation server (2015). arXiv preprint arXiv:1504.00325
Denkowski, M., Lavie, A.: Meteor universal: language specific translation evaluation for any target language. In: Proceedings of the Ninth Workshop on Statistical Machine Translation (2014)
Vedantam, R., Lawrence Zitnick, C., Parikh, D.: CIDEr: consensus-based image description evaluation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
Papineni, K., et al.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics (2002)
Vinyals, O., et al.: Show and tell: a neural image caption generator. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems (2014)
Xu, K., et al.: Show, attend and tell: neural image caption generation with visual attention. In: International Conference on Machine Learning (2015)
You, Q., et al.: Image captioning with semantic attention. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
Pan, Y., et al.: Jointly modeling embedding and translation to bridge video and language. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
Donahue, J., et al.: Long-term recurrent convolutional networks for visual recognition and description. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
Venugopalan, S., Xu, H., Donahue, J., Rohrbach, M., Mooney, R., Saenko, K.: Translating videos to natural language using deep recurrent neural networks. In: NAACL-HLT (2015)
Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: ICLR (2014)
Guadarrama, S., et al.: YouTube2Text: recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In: Proceedings of the IEEE International Conference on Computer Vision (2013)
Rohrbach, M., et al.: Translating video content to natural language descriptions. In: Proceedings of the IEEE International Conference on Computer Vision (2013)
Kulkarni, G., et al.: Baby talk: understanding and generating image descriptions. In: Proceedings of the 24th CVPR (2011)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
Dong, J., et al.: Early embedding and late reranking for video captioning. In: Proceedings of the 2016 ACM on Multimedia Conference. ACM (2016)
Chen, L., et al.: SCA-CNN: spatial and channel-wise attention in convolutional networks for image captioning. In: CVPR (2017)
Fang, H., et al.: From captions to visual concepts and back. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
Chen, D.L., Dolan, W.B.: Collecting highly parallel data for paraphrase evaluation. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 190–200. Association for Computational Linguistics (2011)
Yang, Z., et al.: Review networks for caption generation. In: Advances in Neural Information Processing Systems (2016)
Lu, J., et al.: Knowing when to look: adaptive attention via a visual sentinel for image captioning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 6 (2017)
Bin, Y., et al.: Adaptively attending to visual attributes and linguistic knowledge for captioning. In: Proceedings of the 2017 ACM on Multimedia Conference. ACM (2017)
Acknowledgment
We would like to thank Ziwei Yang, who is one of the authors of [1], for providing us with the source code and preprocessed dataset.
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhou, Y., Hu, Z., Liu, X., Wang, M. (2018). Video Captioning Based on the Spatial-Temporal Saliency Tracing. In: Hong, R., Cheng, W.H., Yamasaki, T., Wang, M., Ngo, C.W. (eds) Advances in Multimedia Information Processing – PCM 2018. Lecture Notes in Computer Science, vol. 11164. Springer, Cham. https://doi.org/10.1007/978-3-030-00776-8_6