
Content-based image retrieval for categorized dataset by aggregating gradient and texture features

  • Review
  • Published in: Neural Computing and Applications

Abstract

Content-based image retrieval is the process of retrieving images similar to a query image. The primary challenge in a content-based retrieval system is measuring the dissimilarity among different objects, and the challenge grows when the dissimilarity among similar objects is small. For instance, retrieving sunflowers from a dataset containing different types of objects (i.e., an uncategorized dataset), such as faces, roses, balls, and mountains, is already challenging; but what if the sunflowers are to be retrieved from a dataset containing only flowers (i.e., a categorized dataset)? Such scenarios make content-based image retrieval harder because the objects are now very similar to each other and finding dissimilarity among them is difficult. In the state-of-the-art approaches, far more work has been done on uncategorized data than on categorized data. The present paper proposes an approach for retrieving images from among objects of the same category as the query image, using gradient and texture features. The proposed approach has been applied to various categories of objects, such as flowers (roses, lilies, sunflowers), toys (cars, bears, frogs), and dogs (Dalmatian, Pomeranian), and it outperforms the state-of-the-art approaches with an average accuracy of 90%.
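To make the aggregation idea concrete, the sketch below shows one plausible pipeline under assumed descriptors: HOG as a stand-in for the gradient features and GLCM statistics as a stand-in for the texture features, each L2-normalized, concatenated, and compared with Euclidean distance against a categorized dataset. The scikit-image functions, parameter values, and distance metric are illustrative assumptions, not the authors' actual method.

```python
# A minimal sketch of aggregating gradient and texture features for retrieval.
# This is NOT the paper's implementation: HOG stands in for the gradient
# descriptor and GLCM statistics for the texture descriptor, and every
# parameter below (image size, cell size, GLCM levels, distance metric)
# is an illustrative assumption.
import numpy as np
from skimage import io, color, transform, util
from skimage.feature import hog, graycomatrix, graycoprops


def extract_features(path, size=(128, 128)):
    """Concatenate an L2-normalized HOG vector with GLCM texture statistics."""
    img = io.imread(path)
    gray = color.rgb2gray(img) if img.ndim == 3 else util.img_as_float(img)
    gray = transform.resize(gray, size, anti_aliasing=True)

    # Gradient part: HOG over the resized grayscale image.
    grad = hog(gray, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

    # Texture part: GLCM contrast/homogeneity/energy/correlation at four angles.
    levels = 64
    quantized = (gray * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    tex = np.hstack([graycoprops(glcm, prop).ravel()
                     for prop in ("contrast", "homogeneity", "energy", "correlation")])

    # Aggregate: normalize each part so neither dominates, then concatenate.
    grad /= np.linalg.norm(grad) + 1e-12
    tex /= np.linalg.norm(tex) + 1e-12
    return np.hstack([grad, tex])


def retrieve(query_path, dataset_paths, top_k=10):
    """Rank the (same-category) dataset images by distance to the query."""
    query = extract_features(query_path)
    feats = np.vstack([extract_features(p) for p in dataset_paths])
    dists = np.linalg.norm(feats - query, axis=1)
    order = np.argsort(dists)[:top_k]
    return [(dataset_paths[i], float(dists[i])) for i in order]
```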




Author information


Corresponding author

Correspondence to Divya Srivastava.

Ethics declarations

Conflict of interest

We, the authors of this paper, declare that we have no conflict of interest. The submitted paper is our own original work and has not been submitted to any other journal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Srivastava, D., Rajitha, B. & Agarwal, S. Content-based image retrieval for categorized dataset by aggregating gradient and texture features. Neural Comput & Applic 33, 12247–12261 (2021). https://doi.org/10.1007/s00521-020-05614-y

