Pattern Recognition and Image Analysis

Volume 23, Issue 3, pp. 415–418

Comparative analysis of color- and grayscale-based feature descriptions for image recognition

  • M. Petrushan
  • Yu. Vermenko
  • D. Shaposhnikov
  • S. Anishchenko
Representation, Processing, Analysis, and Understanding of Images

Abstract

A method for evaluating the applicability of color- and grayscale-based feature spaces to the image recognition problem is considered. The Histogram of Oriented Gradients (HOG) is used as the descriptor of an image area. Color-based descriptions use the gradient computed from one of the channels of the HSV space or of the CIECAM02 model [3]. For each descriptor, parametric optimization is performed to determine the gradient threshold and the size of the image area. The Mahalanobis distance between descriptions of images of different classes serves as the optimality criterion. The feature spaces are analyzed on the task of classifying open versus closed eyes. The separability of eye-image descriptions of different classes proved to be higher when color-based descriptors with adaptation to saturation were used.
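The evaluation pipeline described above, a HOG computed over a single chosen channel with a tunable gradient threshold, and the Mahalanobis distance between class descriptions as the separability criterion, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the simplified single-cell HOG, and the pooled-covariance estimate are assumptions made for the example.

```python
import numpy as np

def hog_descriptor(channel, n_bins=9, grad_thresh=0.0):
    # Simplified single-cell HOG over one image channel
    # (e.g. the V channel of HSV, or a grayscale image).
    gy, gx = np.gradient(channel.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientations in [0, pi)
    keep = mag > grad_thresh                 # the gradient threshold the paper tunes
    hist, _ = np.histogram(ang[keep], bins=n_bins,
                           range=(0.0, np.pi), weights=mag[keep])
    total = hist.sum()
    return hist / total if total > 0 else hist

def mahalanobis(x, y, cov):
    # Mahalanobis distance between descriptors x and y under covariance cov.
    d = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(d @ np.linalg.pinv(cov) @ d))

# Toy separability check between two "classes" of random image patches;
# with real data these would be open-eye and closed-eye image areas.
rng = np.random.default_rng(0)
class_a = np.array([hog_descriptor(rng.random((32, 32))) for _ in range(20)])
class_b = np.array([hog_descriptor(rng.random((32, 32))) for _ in range(20)])
pooled_cov = np.cov(np.vstack([class_a, class_b]).T)
dist = mahalanobis(class_a.mean(axis=0), class_b.mean(axis=0), pooled_cov)
```

In the paper's setting, the gradient threshold and the image-area size would be varied and the configuration maximizing this distance between class descriptions selected.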

Keywords

feature space, descriptor, description separability evaluation, color model


References

  1. A. M. Ferman, A. M. Tekalp, and R. Mehrotra, "Robust color histogram descriptors for video segment retrieval and identification," IEEE Trans. Image Processing 11(5), 497–508 (2002).
  2. G. J. Burghouts and J.-M. Geusebroek, "Performance evaluation of local colour invariants," Comput. Vision Image Understand. 113, 48–62 (2009).
  3. A. Koschan and M. Abidi, "Detection and classification of edges in color images," Signal Processing Mag., Special Issue on Color Image Processing 22(1), 64–73 (2005).
  4. K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Trans. Pattern Anal. Mach. Intellig. 27(10), 1615–1630 (2005).
  5. Qiang Ji, Zhiwei Zhu, and Peilin Lan, "Real-time non-intrusive monitoring and prediction of driver fatigue," IEEE Trans. Vehicular Tech. 53(4), 1052–1106 (2004).
  6. S. A. J. Winder and M. Brown, "Learning local image descriptors," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (Minneapolis, June 2007), pp. 1–8.
  7. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision 60(2), 91–110 (2004).
  8. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, and H. Geerts, "Color invariance," IEEE Trans. Pattern Anal. Mach. Intellig. 23(12), 1338–1350 (2001).
  9. B. Basturk and D. Karaboga, "An Artificial Bee Colony (ABC) algorithm for numeric function optimization," in Proc. IEEE Swarm Intelligence Symp. (Indianapolis, May 12–14, 2006).

Copyright information

© Pleiades Publishing, Ltd. 2013

Authors and Affiliations

  • M. Petrushan (1)
  • Yu. Vermenko (1)
  • D. Shaposhnikov (1)
  • S. Anishchenko (1)
  1. A.B. Kogan Research Institute for Neurocybernetics of Southern Federal University, Rostov-on-Don, Russia
