Low-Level Greyscale Image Descriptors Applied for Intelligent and Contextual Approaches

  • Dariusz Frejlichowski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11432)


The process of image recognition and understanding is not always a trivial task. Automatic analysis of image content can be difficult, and it usually requires identifying the particular objects visible in a scene; however, this approach does not always yield the expected results. In many cases, the overall context of an image, or the relations between objects, carries important information and can lead to conclusions different from those reached by analysing single objects separately. Hence, the obtained result can be considered more ‘intelligent’. The contextual analysis of images can be based on various features; among them, low-level descriptors have been successfully applied to image analysis and recognition. From the obtained object representations one can infer the context of an image as a whole. In this paper, the possibility of applying selected greyscale descriptors in intelligent systems is analysed both analytically and experimentally. The work is based on algorithms that transform pixels from Cartesian into polar co-ordinates.
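The core step mentioned above — resampling a greyscale image from Cartesian into polar co-ordinates — can be sketched as follows. This is a minimal illustration only, not the paper's exact algorithm: the function name, the intensity-weighted centroid as the pole, and the nearest-neighbour sampling are all assumptions made for the example.

```python
import numpy as np

def polar_transform(img, n_radii=64, n_angles=64):
    """Resample a greyscale image onto a (radius, angle) grid centred
    on its intensity centroid, using nearest-neighbour sampling."""
    h, w = img.shape
    total = img.sum()  # assumed non-zero for this sketch
    ys, xs = np.mgrid[0:h, 0:w]
    cy = (ys * img).sum() / total  # intensity-weighted centroid (pole)
    cx = (xs * img).sum() / total
    # largest radius needed to reach the farthest image border
    r_max = np.hypot(max(cy, h - 1 - cy), max(cx, w - 1 - cx))
    radii = np.linspace(0.0, r_max, n_radii)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    # map each (r, theta) back to the nearest Cartesian pixel
    sy = np.clip(np.rint(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    sx = np.clip(np.rint(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[sy, sx]  # shape (n_radii, n_angles)
```

In such a representation, a rotation of the object about the pole becomes a circular shift along the angle axis, which is what makes polar-based greyscale descriptors attractive for rotation-robust matching.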


Keywords: Image recognition · Object identification · Greyscale descriptors · Polar co-ordinates



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Szczecin, Poland