What Convnets Make for Image Captioning?

Conference paper

DOI: 10.1007/978-3-319-51811-4_34

Part of the Lecture Notes in Computer Science book series (LNCS, volume 10132)
Cite this paper as:
Liu Y., Guo Y., Lew M.S. (2017) What Convnets Make for Image Captioning? In: Amsaleg L., Guðmundsson G., Gurrin C., Jónsson B., Satoh S. (eds) MultiMedia Modeling. MMM 2017. Lecture Notes in Computer Science, vol 10132. Springer, Cham

Abstract

Nowadays, a general pipeline for the image captioning task takes advantage of image representations based on convolutional neural networks (CNNs) and sequence modeling based on recurrent neural networks (RNNs). As captioning performance closely depends on the discriminative capacity of CNNs, our work aims to investigate the effects of different Convnets (CNN models) on image captioning. We train three Convnets based on different classification tasks: single-label, multi-label and multi-attribute, and then feed visual representations from these Convnets into a Long Short-Term Memory (LSTM) to model the sequence of words. Since the three Convnets focus on different visual contents in one image, we propose aggregating them to generate a richer visual representation. Furthermore, during testing, we use an efficient multi-scale augmentation approach based on fully convolutional networks (FCNs). Extensive experiments on the MS COCO dataset provide significant insights into the effects of Convnets. Finally, we achieve results comparable to the state of the art on both caption generation and image-sentence retrieval tasks.
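To make the pipeline described in the abstract concrete, the following is a minimal sketch of fusing visual representations from three Convnets and decoding a caption with an LSTM. The class name, feature dimensions, and fusion by simple concatenation followed by a linear projection are illustrative assumptions; the paper's aggregation module and multi-scale FCN testing are not reproduced here.

```python
import torch
import torch.nn as nn


class AggregatedCaptioner(nn.Module):
    """Hypothetical sketch: fuse features from three Convnets, decode with an LSTM.

    Feature dimensions and concatenation-based fusion are assumptions for
    illustration; the paper's actual aggregation module may differ.
    """

    def __init__(self, feat_dims=(2048, 2048, 2048), embed_dim=512,
                 hidden_dim=512, vocab_size=10000):
        super().__init__()
        # Project the concatenated Convnet features into the LSTM input space.
        self.fuse = nn.Linear(sum(feat_dims), embed_dim)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats_single, feats_multi, feats_attr, captions):
        # Aggregate the three visual representations (here: simple concatenation).
        visual = self.fuse(torch.cat([feats_single, feats_multi, feats_attr], dim=1))
        # Use the fused visual vector as the first input step of the sequence,
        # followed by the embedded caption words.
        words = self.word_embed(captions)                        # (B, T, E)
        inputs = torch.cat([visual.unsqueeze(1), words], dim=1)  # (B, T+1, E)
        hidden, _ = self.lstm(inputs)
        return self.classifier(hidden)                           # per-step word logits
```

In this sketch, `feats_single`, `feats_multi`, and `feats_attr` stand for the representations extracted by the single-label, multi-label, and multi-attribute Convnets; at test time the logits would instead drive step-by-step (e.g. greedy or beam-search) word generation.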

Keywords

Image captioning · Convolutional neural networks · Aggregation module · Long short-term memory · Multi-scale testing

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. LIACS Media Lab, Leiden University, Leiden, The Netherlands