
Eye Like Landmarks Extraction and Patching for Face Detection Using Deep Neural Network

  • Conference paper
  • First Online:

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1036)

Abstract

This paper presents a novel technique for extracting facial Eye Like Landmarks (ELL) for face detection. The proposed technique pre-processes input images for skin segmentation using a minimal approach that segments all skin regions in a fragmented manner. Morphological closing then consolidates the fragmented skin regions, and Contrast Limited Adaptive Histogram Equalization (CLAHE) is applied to enhance eye-like regions. The ELL are extracted from the skin regions by a divide-and-conquer method with a heuristically calculated threshold; rotation and scale are taken into account during extraction. Pairs of ELL that satisfy the criteria for a possible eye pair are combined into a patched image, which is smaller in dimension and therefore easier and faster to process. The patched images are used to train a Deep Neural Network, which achieves 97% classification accuracy on face and non-face patches. The results section discusses the training and validation loss. The extracted patches can be further processed to extract features for robust face detection.
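As a rough illustration of the pre-processing stages described above, the sketch below shows skin segmentation, morphological closing, and CLAHE enhancement. It assumes OpenCV, a simple YCrCb colour threshold for skin segmentation, and illustrative kernel and CLAHE parameters; none of these values are taken from the paper, and the divide-and-conquer ELL extraction and patching steps are not reproduced.

import cv2
import numpy as np

def preprocess_for_ell(bgr_image):
    """Sketch of the pre-processing pipeline: skin segmentation,
    morphological closing, and CLAHE enhancement of skin regions."""
    # Skin segmentation via a simple YCrCb threshold (illustrative bounds,
    # not the paper's minimal segmentation approach).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb,
                            np.array([0, 133, 77], np.uint8),
                            np.array([255, 173, 127], np.uint8))

    # Morphological closing to consolidate fragmented skin regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, kernel)

    # CLAHE on the grayscale image to enhance eye-like regions.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)

    # Keep only the enhanced pixels inside the segmented skin regions;
    # ELL candidates would then be extracted from this map.
    return cv2.bitwise_and(enhanced, enhanced, mask=skin_mask)

# Example usage (hypothetical file name):
# candidate_map = preprocess_for_ell(cv2.imread("face.jpg"))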



Acknowledgements

The authors thank the Ministry of Electronics and Information Technology (MeitY), New Delhi, for granting the Visvesvaraya Ph.D. fellowship through file no. PhD-MLA\4(34)\2014-15, dated 10/04/2015.

Author information

Corresponding author

Correspondence to Dattatray D. Sawat.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Sawat, D.D., Hegadi, R.S., Hegadi, R.S. (2019). Eye Like Landmarks Extraction and Patching for Face Detection Using Deep Neural Network. In: Santosh, K., Hegadi, R. (eds) Recent Trends in Image Processing and Pattern Recognition. RTIP2R 2018. Communications in Computer and Information Science, vol 1036. Springer, Singapore. https://doi.org/10.1007/978-981-13-9184-2_36


  • DOI: https://doi.org/10.1007/978-981-13-9184-2_36

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-9183-5

  • Online ISBN: 978-981-13-9184-2

  • eBook Packages: Computer Science, Computer Science (R0)
