Abstract
Generating accurate image descriptions has been one of the most widely discussed problems in the field of Artificial Intelligence. So many models and techniques have been developed in the past few years that it is difficult to trace the exact path of image captioning research. This paper aims to give the reader a clear view of that evolution, covering both traditional procedures and the advances made with the aid of deep learning. It discusses the methods in detail, examining the depth and logic behind each approach, and builds on the contributions of earlier authors. Finally, it sheds light on how the field may grow in the near and long term.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Banerjee, T., Sharma, A., Charvi, K., Raman, S., Regalla, R.G., Sindhupriya, T. (2022). Journey of Letters to Vectors Through Neural Networks. In: Gupta, D., Polkowski, Z., Khanna, A., Bhattacharyya, S., Castillo, O. (eds) Proceedings of Data Analytics and Management. Lecture Notes on Data Engineering and Communications Technologies, vol 90. Springer, Singapore. https://doi.org/10.1007/978-981-16-6289-8_58
DOI: https://doi.org/10.1007/978-981-16-6289-8_58
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-6288-1
Online ISBN: 978-981-16-6289-8
eBook Packages: Intelligent Technologies and Robotics (R0)