Abstract
In this paper, a novel framework named the global-local feature attention network with reranking strategy (GLAN-RS) is presented for the image captioning task. Rather than relying only on the unitary global visual information used in classical models, GLAN-RS exploits an attention mechanism to capture salient local convolutional feature maps. Furthermore, a reranking strategy is adopted to adjust the priority of the candidate captions and select the best one. The proposed model is verified on the Microsoft Common Objects in Context (MSCOCO) benchmark dataset across seven standard evaluation metrics. Experimental results show that GLAN-RS significantly outperforms state-of-the-art approaches such as the multimodal recurrent neural network (MRNN) and Google NIC, with an improvement of 20% in BLEU-4 score and 13 points in CIDEr score.
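To make the two ingredients of GLAN-RS concrete, the sketch below illustrates the general idea in PyTorch: additive attention over local convolutional feature maps fused with a global image feature, plus a simple length-normalised log-probability reranking of candidate captions. This is a minimal illustration rather than the authors' implementation; the layer sizes, tensor shapes, candidate format, and reranking criterion are assumptions made for exposition.

```python
# Illustrative sketch (not the authors' released code). All dimensions,
# names and the scoring rule are assumptions for exposition only.
import torch
import torch.nn as nn

class GlobalLocalAttention(nn.Module):
    """Attend over local CNN feature vectors conditioned on the decoder state,
    then fuse the attended context with the global image feature."""
    def __init__(self, feat_dim=512, hidden_dim=512, attn_dim=256):
        super().__init__()
        self.local_proj = nn.Linear(feat_dim, attn_dim)
        self.state_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)
        self.fuse = nn.Linear(2 * feat_dim, hidden_dim)

    def forward(self, local_feats, global_feat, hidden):
        # local_feats: (B, L, feat_dim), e.g. a 14x14 conv grid flattened to L=196
        # global_feat: (B, feat_dim), e.g. a pooled/FC feature of the same CNN
        # hidden:      (B, hidden_dim), current decoder (RNN) state
        e = self.score(torch.tanh(
            self.local_proj(local_feats) + self.state_proj(hidden).unsqueeze(1)
        )).squeeze(-1)                      # (B, L) attention energies
        alpha = torch.softmax(e, dim=1)     # attention weights over regions
        context = (alpha.unsqueeze(-1) * local_feats).sum(dim=1)   # (B, feat_dim)
        fused = torch.tanh(self.fuse(torch.cat([context, global_feat], dim=1)))
        return fused, alpha

def rerank_captions(candidates):
    """Toy reranking: reorder beam-search candidates by length-normalised
    log-probability. The actual GLAN-RS reranking criterion may differ."""
    return sorted(candidates,
                  key=lambda c: c["logprob"] / max(len(c["tokens"]), 1),
                  reverse=True)

if __name__ == "__main__":
    B, L, D, H = 2, 196, 512, 512
    attn = GlobalLocalAttention(D, H)
    fused, alpha = attn(torch.randn(B, L, D), torch.randn(B, D), torch.randn(B, H))
    print(fused.shape, alpha.shape)  # torch.Size([2, 512]) torch.Size([2, 196])

    cands = [{"tokens": ["a", "dog", "runs"], "logprob": -4.2},
             {"tokens": ["a", "dog"], "logprob": -3.5}]
    print(rerank_captions(cands)[0]["tokens"])  # length-normalised winner
```

The attention module returns both the fused feature and the weights, so the region weights can be inspected or visualised; the reranker is deliberately decoupled from the decoder so any candidate-scoring rule can be swapped in.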
References
Y.-F. Jiang, H. Zhang, Y.-B. Xue, M. Zhou, G.-P. Xu and Z. Gao, Journal of Optoelectronics·Laser 27, 224 (2016). (in Chinese)
J.-D. Sun, H.-H. Li and J.-L. Jin, Journal of Optoelectronics·Laser 28, 441 (2017). (in Chinese)
A. Krizhevsky, I. Sutskever and G. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, 25th International Conference on Neural Information Processing Systems, 1097 (2012).
J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang and A. Yuille, Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), arXiv:1412.6632, 2014.
K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel and Y. Bengio, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, arXiv:1502.03044, 2015.
O. Vinyals, A. Toshev, S. Bengio and D. Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555, 2015.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, Going Deeper with Convolutions, arXiv:1409.4842, 2014.
K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv:1409.1556, 2014.
J. Chung, C. Gulcehre, K. Cho and Y. Bengio, Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, arXiv:1412.3555, 2014.
J. Devlin, S. Gupta, R. Girshick, M. Mitchell and C. L. Zitnick, Exploring Nearest Neighbor Approaches for Image Captioning, arXiv:1505.04467, 2015.
T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar and C. L. Zitnick, Microsoft COCO: Common Objects in Context, European Conference on Computer Vision, 740 (2014).
X. Chen, H. Fang, T. Y. Lin, R. Vedantam, S. Gupta, P. Dollar and C. L. Zitnick, Microsoft COCO Captions: Data Collection and Evaluation Server, arXiv:1504.00325, 2015.
C. Rashtchian, P. Young, M. Hodosh and J. Hockenmaier, Collecting Image Captions Using Amazon’s Mechanical Turk, NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, 139 (2010).
K. Papineni, S. Roukos, T. Ward and W.-J. Zhu, BLEU: A Method for Automatic Evaluation of Machine Translation, 40th Annual Meeting of the Association for Computational Linguistics, 311 (2002).
R. Vedantam, C. L. Zitnick and D. Parikh, CIDEr: Consensus-Based Image Description Evaluation, IEEE Conference on Computer Vision and Pattern Recognition, 4566 (2015).
C.-Y. Lin, ROUGE: A Package for Automatic Evaluation of Summaries, Proceedings of the Workshop on Text Summarization Branches Out, 2004.
S. Banerjee and A. Lavie, METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments, ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 65 (2005).
X. Jia, E. Gavves, B. Fernando and T. Tuytelaars, Guiding the Long-Short Term Memory Model for Image Caption Generation, IEEE International Conference on Computer Vision, 2407 (2015).
Additional information
This work was supported by the Innovative Application and Research Project of Guangdong Province (No. 2016KZDXM013) and the Science & Technology Project of Shantou City (No. A201400150). This paper was presented in part at the CCF Chinese Conference on Computer Vision, Tianjin, 2017, and was recommended by the program committee.
Cite this article
Wu, J., Xie, Sy., Shi, Xb. et al. Global-local feature attention network with reranking strategy for image caption generation. Optoelectron. Lett. 13, 448–451 (2017). https://doi.org/10.1007/s11801-017-7185-4