
Automatic Feature Extraction for CBIR and Image Annotation Applications

  • S. B. Nemade
  • S. P. Sonavane
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1025)

Abstract

In information technology, organizing and indexing digital information is a primary concern. In content-based image retrieval (CBIR) systems, one of the most significant issues is the semantic gap: the difference between the low-level features extracted from an image and a human's interpretation of the content of that image or its regions. Automatic image annotation has therefore gained momentum in recent years. The objective of automatic image annotation (AIA) is to assign textual labels to an image that clearly describe its content or the objects it contains. The accuracy of an automatic image annotation algorithm depends on the feature extraction process, so an effective feature extraction algorithm is essential. In this paper, a feature extraction algorithm based on the Gabor filter is presented. Through its multi-resolution capability, the Gabor filter extracts effective features from whole images or from regions obtained after segmentation. It is demonstrated that the Gabor filter produces a small number of low-level features that describe the image accurately when a filter whose frequency response lies in the band of 50–75% of the total frequency is selected. These extracted features further reduce the complexity of classification algorithms built on statistical models or soft computing techniques.
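The extraction step described above can be illustrated with a short sketch. The snippet below is a minimal, illustrative implementation, not the authors' code: it builds a small bank of Gabor kernels over several orientations and wavelengths with OpenCV, filters the image (or a segmented region), and keeps the mean and standard deviation of each response as a compact low-level feature vector. The helper name `gabor_feature_vector` and all kernel parameters (kernel size, sigma, gamma, the wavelength set) are assumptions; in the paper's terms, the wavelengths would be chosen so that the filters' passbands fall in the 50–75% frequency band.

```python
import cv2
import numpy as np

def gabor_feature_vector(image_bgr, orientations=4, wavelengths=(4.0, 8.0, 16.0)):
    """Return a compact Gabor feature vector: the mean and standard
    deviation of each filter response over a small bank of kernels.
    All parameter values here are illustrative assumptions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    features = []
    for lambd in wavelengths:                    # wavelength sets the scale (frequency band)
        for k in range(orientations):
            theta = k * np.pi / orientations     # evenly spaced orientations in [0, pi)
            kernel = cv2.getGaborKernel(
                ksize=(21, 21), sigma=4.0, theta=theta,
                lambd=lambd, gamma=0.5, psi=0.0, ktype=cv2.CV_32F)
            response = cv2.filter2D(gray, cv2.CV_32F, kernel)
            # Summary statistics keep the feature count low, as the paper advocates.
            features.extend([float(response.mean()), float(response.std())])
    return np.asarray(features, dtype=np.float32)

# Example usage on a whole image or a segmented region crop:
# img = cv2.imread("region.png")       # hypothetical input file
# feats = gabor_feature_vector(img)    # 3 scales x 4 orientations x 2 stats = 24 values
```

Keeping only two statistics per filter response is one simple way to realize the "less number of features" property claimed in the abstract; richer descriptors (e.g., response energy per block) would trade compactness for detail.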

Keywords

Gabor filter · Feature extraction · Image annotation · CBIR

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Computer Science & Engineering, Walchand College of Engineering, Sangli, India
  2. Department of Information Technology, Walchand College of Engineering, Sangli, India
