Multimedia Tools and Applications, Volume 78, Issue 21, pp 30651–30675

Multi-modal multi-concept-based deep neural network for automatic image annotation

  • Haijiao Xu
  • Changqin Huang (corresponding author)
  • Xiaodi Huang
  • Muxiong Huang
Article

Abstract

Automatic Image Annotation (AIA) remains a challenge in computer vision with real-world applications, owing to the semantic gap between high-level semantic concepts and low-level visual appearances. Contextual tags attached to visual images and context semantics among semantic concepts can provide further semantic information to bridge this gap. To effectively capture these semantic correlations, we present a novel approach in this study called the Multi-modal Multi-concept-based Deep Neural Network (M2-DNN), which models the correlations among visual images, contextual tags, and multi-concept semantics. Unlike traditional AIA methods, our M2-DNN approach takes into account not only single-concept context semantics, but also multi-concept context semantics with abstract scenes. In our model, a multi-concept such as \(\{``plane",``buildings"\}\) is viewed as one holistic scene concept for concept learning. Specifically, we first construct a multi-modal Deep Neural Network (DNN) as a concept classifier for visual images and contextual tags, and then employ it to annotate unlabeled images. Second, real-world databases commonly include many difficult concepts that are hard to recognize, such as concepts with similar appearances, concepts with abstract scenes, and rare concepts. To recognize them effectively, we utilize multi-concept semantics inference and multi-modal correlation learning to refine the semantic annotations. Finally, we estimate the most relevant labels for each unlabeled image through a new label-decision strategy. The results of our comprehensive experiments on two publicly available datasets show that our method performs favourably compared with several state-of-the-art methods.
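As a rough illustration of the first stage described above (a multi-modal concept classifier over image features and contextual tags with multi-label outputs, where a multi-concept scene is treated as one holistic label), the following is a minimal PyTorch sketch. The layer sizes, the use of pre-extracted CNN features, the averaging of tag embeddings, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-branch image/tag concept classifier.
# All names and dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class MultiModalConceptClassifier(nn.Module):
    def __init__(self, img_feat_dim=4096, tag_vocab_size=1000,
                 tag_embed_dim=300, hidden_dim=512, num_concepts=81):
        super().__init__()
        # Visual branch: operates on pre-extracted CNN features (e.g. a VGG fc7 vector).
        self.img_branch = nn.Sequential(
            nn.Linear(img_feat_dim, hidden_dim), nn.ReLU())
        # Textual branch: averages the embeddings of an image's contextual tags.
        self.tag_embed = nn.EmbeddingBag(tag_vocab_size, tag_embed_dim, mode="mean")
        self.tag_branch = nn.Sequential(
            nn.Linear(tag_embed_dim, hidden_dim), nn.ReLU())
        # Joint classifier over the fused representation; one sigmoid unit per concept,
        # where "concepts" may include multi-concept scenes such as {"plane", "buildings"}
        # treated as single holistic labels.
        self.classifier = nn.Linear(2 * hidden_dim, num_concepts)

    def forward(self, img_feats, tag_ids, tag_offsets):
        v = self.img_branch(img_feats)
        t = self.tag_branch(self.tag_embed(tag_ids, tag_offsets))
        logits = self.classifier(torch.cat([v, t], dim=1))
        return torch.sigmoid(logits)

# Training such a classifier would typically minimize a multi-label loss,
# e.g. nn.BCELoss() against ground-truth concept indicator vectors.
```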

Keywords

Automatic image annotation · Deep neural network · Multi-concept semantics · Machine learning

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61877020), the GDUPS (2015), the CSC (No. 201706755023) and the China Postdoctoral Science Foundation (No. 2016M600657 and 2017T100637).

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Information Technology in Education, South China Normal University, Guangzhou, China
  2. Guangdong Engineering Research Center for Smart Learning, South China Normal University, Guangzhou, China
  3. School of Computing and Mathematics, Charles Sturt University, Albury, Australia
