A New Invariant to Illumination Feature Descriptor for Pattern Recognition

  • MATHEMATICAL MODELS AND COMPUTATIONAL METHODS
  • Journal of Communications Technology and Electronics

Abstract—A new descriptor of features in gray-scale images that is invariant to nonuniform illumination is proposed. The suggested method of feature descriptor design is based on the local energy model, a biologically plausible model of the visual system. The algorithm for feature detection and descriptor construction uses the scale-space monogenic signal framework and a modified histogram-of-oriented-gradients computation based on the phase congruency of the signals. Computer simulation results show that, in comparison with known descriptors, the proposed descriptor provides excellent detection and matching of features under nonuniform illumination, noise, and minor geometric distortions.
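The abstract gives only a high-level description of the pipeline, so the sketch below is a rough illustration of the ingredients it names: a band-pass monogenic signal computed over several scales, a Kovesi-style phase congruency measure derived from it, and a HOG-like histogram in which phase congruency replaces the gradient magnitude. It is not the authors' implementation; the log-Gabor radial filters, the wavelengths, the patch/cell/bin layout, and all function names are illustrative assumptions.

```python
import numpy as np


def monogenic_responses(img, wavelengths=(4, 8, 16), sigma=0.55):
    """Band-pass monogenic signal at several scales (illustrative parameters).

    For each wavelength a log-Gabor radial filter is combined with the Riesz
    transform in the Fourier domain, giving an even (band-pass) part and two
    odd (Riesz) parts per scale."""
    rows, cols = img.shape
    u, v = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0                      # avoid division by zero at DC
    riesz1 = 1j * u / radius                # Riesz transform transfer functions
    riesz2 = 1j * v / radius
    spectrum = np.fft.fft2(img.astype(float))
    responses = []
    for wavelength in wavelengths:
        log_gabor = np.exp(-np.log(radius * wavelength) ** 2
                           / (2 * np.log(sigma) ** 2))
        log_gabor[0, 0] = 0.0               # suppress the DC component
        even = np.real(np.fft.ifft2(spectrum * log_gabor))
        odd1 = np.real(np.fft.ifft2(spectrum * log_gabor * riesz1))
        odd2 = np.real(np.fft.ifft2(spectrum * log_gabor * riesz2))
        responses.append((even, odd1, odd2))
    return responses


def phase_congruency(responses, eps=1e-4):
    """Simple multi-scale phase congruency (no noise compensation): length of
    the summed response vector divided by the sum of the local amplitudes."""
    sum_e = sum_o1 = sum_o2 = sum_amp = 0.0
    for even, odd1, odd2 in responses:
        sum_e = sum_e + even
        sum_o1 = sum_o1 + odd1
        sum_o2 = sum_o2 + odd2
        sum_amp = sum_amp + np.sqrt(even ** 2 + odd1 ** 2 + odd2 ** 2)
    pc = np.sqrt(sum_e ** 2 + sum_o1 ** 2 + sum_o2 ** 2) / (sum_amp + eps)
    orientation = np.arctan2(sum_o2, sum_o1)    # orientation from Riesz parts
    return pc, orientation


def pc_hog(pc, orientation, keypoint, patch=16, cells=4, bins=8):
    """HOG-like descriptor around a keypoint in which phase congruency plays
    the role of the gradient magnitude (layout parameters are illustrative)."""
    y, x = keypoint
    half, cell = patch // 2, patch // cells
    win_pc = pc[y - half:y + half, x - half:x + half]
    win_or = orientation[y - half:y + half, x - half:x + half]
    hists = []
    for i in range(cells):
        for j in range(cells):
            c_pc = win_pc[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            c_or = win_or[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist, _ = np.histogram(c_or, bins=bins, range=(-np.pi, np.pi),
                                   weights=c_pc)
            hists.append(hist)
    descriptor = np.concatenate(hists)
    return descriptor / (np.linalg.norm(descriptor) + 1e-8)


if __name__ == "__main__":
    image = np.random.rand(128, 128)        # stand-in for a gray-scale image
    pc, ori = phase_congruency(monogenic_responses(image))
    print(pc_hog(pc, ori, keypoint=(64, 64)).shape)   # 4 * 4 * 8 = (128,)
```

Because the orientation histogram is weighted by phase congruency, which is a contrast-independent (phase-based) measure, rather than by gradient magnitude, such a descriptor is far less sensitive to slowly varying, nonuniform illumination than a standard HOG.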


ACKNOWLEDGMENTS

This work was supported by the Russian Science Foundation, grant no. 15-19-10010.

Author information

Correspondence to J. Diaz-Escobar, V. I. Kober or V. N. Karnaukhov.

Additional information

Translated by A. Ivanov

Cite this article

Diaz-Escobar, J., Kober, V.I., Karnaukhov, V.N. et al. A New Invariant to Illumination Feature Descriptor for Pattern Recognition. J. Commun. Technol. Electron. 63, 1469–1474 (2018). https://doi.org/10.1134/S1064226918120045
