Global-local feature attention network with reranking strategy for image caption generation

Published in Optoelectronics Letters

Abstract

In this paper, a novel framework, named global-local feature attention network with reranking strategy (GLAN-RS), is presented for the image captioning task. Rather than relying only on unitary visual information as in classical models, GLAN-RS exploits an attention mechanism to capture local convolutional salient image maps. Furthermore, a reranking strategy is adopted to adjust the priority of the candidate captions and select the best one. The proposed model is verified on the Microsoft Common Objects in Context (MSCOCO) benchmark dataset across seven standard evaluation metrics. Experimental results show that GLAN-RS significantly outperforms state-of-the-art approaches such as the multimodal recurrent neural network (MRNN) and Google NIC, achieving an improvement of 20% in BLEU-4 score and 13 points in CIDEr score.
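To make the two ingredients of the abstract concrete, the following is a minimal sketch (not the authors' implementation): a soft-attention step that weights local convolutional feature maps against a decoder state to produce a local context vector, and a toy reranking step that re-orders beam-search candidate captions. All function names, shapes, and the length-normalized reranking score are illustrative assumptions; the paper's actual reranking criterion may differ.

```python
# Illustrative sketch only: soft attention over local conv feature maps plus a
# toy reranking of candidate captions. Names, shapes, and scoring are assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(local_feats, hidden, W_feat, W_hid, w_score):
    """Soft attention: score each spatial region against the decoder hidden
    state and return the attention-weighted local context vector.
    local_feats: (num_regions, feat_dim), hidden: (hid_dim,)."""
    scores = np.tanh(local_feats @ W_feat + hidden @ W_hid) @ w_score  # (num_regions,)
    alpha = softmax(scores)                                            # attention weights
    return alpha @ local_feats                                         # (feat_dim,) context

def rerank(candidates):
    """Toy reranking: re-order beam-search candidates by a combined score,
    here length-normalized log-probability. candidates: list of
    (caption, log_prob) pairs; returns them best-first."""
    return sorted(candidates,
                  key=lambda c: c[1] / max(len(c[0].split()), 1),
                  reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_regions, feat_dim, hid_dim = 49, 512, 256   # e.g. a 7x7 conv grid
    local = rng.standard_normal((num_regions, feat_dim))
    h = rng.standard_normal(hid_dim)
    W_f = rng.standard_normal((feat_dim, 64))
    W_h = rng.standard_normal((hid_dim, 64))
    w_s = rng.standard_normal(64)
    context = attend(local, h, W_f, W_h, w_s)
    print("attended context shape:", context.shape)
    best = rerank([("a man riding a horse", -6.2),
                   ("a man on a horse in a field", -8.1)])[0]
    print("top caption after reranking:", best[0])
```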


References

  1. JIANG Ying-feng, ZHANG Hua, XUE Yan-bing, ZHOU Mian, XU Guang-ping and GAO Zan, Journal of Optoelectronics·Laser 27, 224 (2016). (in Chinese)

  2. SUN Jun-ding, LI Hai-hua and JIN Jiao-lin, Journal of Optoelectronics·Laser 28, 441 (2017). (in Chinese)

  3. A. Krizhevsky, I. Sutskever and G. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, 25th International Conference on Neural Information Processing Systems, 1097 (2012).

  4. J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang and A. Yuille, Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), arXiv:1412.6632, 2014.

  5. K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel and Y. Bengio, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, arXiv:1502.03044, 2015.

  6. O. Vinyals, A. Toshev, S. Bengio and D. Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555, 2015.

  7. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, Going Deeper with Convolutions, arXiv:1409.4842, 2014.

  8. K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv:1409.1556, 2014.

  9. J. Chung, C. Gulcehre, K. Cho and Y. Bengio, Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, arXiv:1412.3555, 2014.

  10. J. Devlin, S. Gupta, R. Girshick, M. Mitchell and C. L. Zitnick, Exploring Nearest Neighbor Approaches for Image Captioning, arXiv:1505.04467, 2015.

  11. T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar and C. L. Zitnick, Microsoft COCO: Common Objects in Context, European Conference on Computer Vision, 740 (2014).

  12. X. Chen, H. Fang, T. Y. Lin, R. Vedantam, S. Gupta, P. Dollar and C. L. Zitnick, Microsoft COCO Captions: Data Collection and Evaluation Server, arXiv:1504.00325, 2015.

  13. C. Rashtchian, P. Young, M. Hodosh and J. Hockenmaier, Collecting Image Captions Using Amazon’s Mechanical Turk, NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, 139 (2010).

  14. K. Papineni, S. Roukos, T. Ward and W.-J. Zhu, BLEU: A Method for Automatic Evaluation of Machine Translation, 40th Annual Meeting of the Association for Computational Linguistics, 311 (2002).

  15. R. Vedantam, C. L. Zitnick and D. Parikh, CIDEr: Consensus-Based Image Description Evaluation, IEEE Conference on Computer Vision and Pattern Recognition, 4566 (2015).

  16. C.-Y. Lin, ROUGE: A Package for Automatic Evaluation of Summaries, Proceedings of the Workshop on Text Summarization Branches Out, 2004.

  17. S. Banerjee and A. Lavie, METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments, ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 65 (2005).

  18. X. Jia, E. Gavves, B. Fernando and T. Tuytelaars, Guiding the Long-Short Term Memory Model for Image Caption Generation, IEEE International Conference on Computer Vision, 2407 (2015).


Author information


Corresponding author

Correspondence to Yao-wen Chen  (陈耀文).

Additional information

This work has been supported by the Innovative Application and Research Project of Guangdong Province (No.2016KZDXM013) and the Science & Technology Project of Shantou City (No.A201400150). This paper was presented in part at the CCF Chinese Conference on Computer Vision, Tianjin, 2017, and was recommended by the program committee.

Cite this article

Wu, J., Xie, Sy., Shi, Xb. et al. Global-local feature attention network with reranking strategy for image caption generation. Optoelectron. Lett. 13, 448–451 (2017). https://doi.org/10.1007/s11801-017-7185-4
