
Asymmetry Level in Cleft Lip Children Using Dendrite Morphological Neural Network

  • Griselda Cortés
  • Fabiola Villalobos
  • Mercedes Flores
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11524)

Abstract

Approximately 3% of live newborns suffer from cleft lip syndrome. Technology can support the treatment of patients with congenital facial abnormalities, in particular those presenting oral fissures. Classifying the level of facial dysmorphism is relevant to physicians, since no tools currently exist to determine its degree. In this paper, a mobile application is proposed to process and analyze images, using the DLIB face landmark detector to map different face areas in healthy pediatric patients and in those presenting cleft lip syndrome. Two repositories are also created: the first contains all extracted facial features, and the second stores the training patterns used to classify the severity of each case. Finally, the Dendrite Morphological Neural Network (DMNN) was selected as the classifier for several reasons: it was among the highest-performing methods compared with MLP, SVM, IBk and RBF; it is easy to implement in mobile applications; and its landmark mapping proved more efficient than the OpenCV and Face detector alternatives.
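As a concrete illustration of the landmark-mapping step described above, the following Python sketch uses DLIB's 68-point shape predictor to locate facial landmarks and derives a crude left/right asymmetry measure from them. The predictor file name, the input image and the asymmetry measure itself are illustrative assumptions, not the procedure reported in this paper.

```python
# Minimal sketch: locate facial landmarks with DLIB and compute a crude
# left/right asymmetry score around the mouth. The predictor file, the input
# image and the asymmetry measure are illustrative assumptions.
import dlib
import numpy as np

# Pre-trained 68-point model, downloadable from dlib.net (assumed to be local).
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def landmarks(image_path):
    """Return the 68 (x, y) landmark coordinates of the first detected face."""
    img = dlib.load_rgb_image(image_path)
    faces = detector(img, 1)  # upsample once so small faces are found
    if not faces:
        raise ValueError("no face detected")
    shape = predictor(img, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=float)

def mouth_asymmetry(pts):
    """Mirror the face about a vertical midline estimated from the nose bridge
    (points 27-30) and average the mismatch of paired outer-lip landmarks."""
    midline_x = pts[27:31, 0].mean()
    pairs = [(48, 54), (49, 53), (50, 52), (59, 55), (58, 56)]  # left/right lip pairs
    errors = []
    for left, right in pairs:
        mirrored_right_x = 2 * midline_x - pts[right, 0]
        errors.append(abs(pts[left, 0] - mirrored_right_x) +
                      abs(pts[left, 1] - pts[right, 1]))
    return float(np.mean(errors))

if __name__ == "__main__":
    pts = landmarks("patient_frontal.jpg")  # hypothetical input image
    print("mouth asymmetry (pixels):", mouth_asymmetry(pts))
```

Likewise, a minimal sketch of the DMNN decision rule as it is usually stated in the dendrite-morphological literature: each dendrite stores a hyperbox, and a pattern is assigned to the class whose best dendrite responds most strongly. The hyperboxes and class labels below are toy values and the training procedure is omitted; this is not the trained model used in the paper.

```python
# Minimal sketch of the DMNN decision rule: each dendrite stores a hyperbox
# [w_min, w_max]; a pattern goes to the class whose best dendrite responds
# most strongly. Box values and labels are toy numbers; training is omitted.
import numpy as np

def dendrite_response(x, w_min, w_max):
    """Non-negative iff x lies inside the hyperbox [w_min, w_max]."""
    return float(np.minimum(x - w_min, w_max - x).min())

def dmnn_classify(x, boxes_per_class):
    """boxes_per_class maps a class label to a list of (w_min, w_max) pairs."""
    scores = {label: max(dendrite_response(x, lo, hi) for lo, hi in boxes)
              for label, boxes in boxes_per_class.items()}
    return max(scores, key=scores.get)

# Toy example with two severity classes and one hand-made box each.
boxes = {
    "mild":   [(np.array([0.0, 0.0]), np.array([0.5, 0.5]))],
    "severe": [(np.array([0.5, 0.5]), np.array([1.0, 1.0]))],
}
print(dmnn_classify(np.array([0.2, 0.3]), boxes))  # -> mild
```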

Keywords

Facial dystrophy · Cleft lip · Mobile technology · Dendrite Morphological Neural Network

Notes

Acknowledgements

Griselda Cortés and Mercedes Flores wish to thank COMECYT and the Tecnológico de Estudios Superiores de Ecatepec (TESE) for their support in the development of this project. The authors also thank Juan C. Guzmán, Itzel Saldivar and Diana López for their work and dedication to this project, with which they aim to obtain their degree in Computer Systems Engineering at TESE.

References

  1. Christopher, D.: http://www.drderderian.com. Accessed 11 Feb 2018
  2. Liu, M., et al.: Landmark-based deep multi-instance learning for brain disease diagnosis. Med. Image Anal. 43, 157–168 (2018)
  3. Tang, X., et al.: Facial landmark detection by semi-supervised deep learning. Neurocomputing 297, 22–32 (2018). https://doi.org/10.1016/j.neucom.2018.01.080
  4. Li, Y., et al.: Face recognition based on recurrent regression neural network. Neurocomputing 297, 50–58 (2018). https://doi.org/10.1016/j.neucom.2018.02.037
  5. Deng, W., et al.: Facial landmark localization by enhanced convolutional neural network. Neurocomputing 273, 222–229 (2018). https://doi.org/10.1016/j.patrec.2016.07.005
  6. Fan, H., Zhou, E.: Approaching human level facial landmark localization by deep learning. Image Vis. Comput. 47, 27–35 (2016). https://doi.org/10.1016/j.imavis.2015.11.004
  7. Smith, M., et al.: Continuous face authentication scheme for mobile devices with tracking and liveness detection. Microprocess. Microsyst. 63, 147–157 (2018). https://doi.org/10.1016/j.micpro.2018.07.008
  8. Barman, A., Dutta, P.: Facial expression recognition using distance and shape signature features. Pattern Recogn. Lett. 1–8 (2017). https://doi.org/10.1016/j.procs.2017.03.069
  9. Josué Daniel, E.: Makoa: application for viewing DICOM medical images on iPad mobile devices. http://132.248.52.100:8080/xmlui/handle/132.248.52.100/7749. Accessed 22 Apr 2017
  10. Jones, A.L.: The influence of shape and colour cue classes on facial health perception. Evol. Hum. Behav. 19–29 (2018). https://doi.org/10.1016/j.evolhumbehav.2017.09.005
  11. Mirsha, et al.: Effects of hormonal treatment, maxillofacial surgery, orthodontics, traumatism and malformation on fluctuating asymmetry. Revista Argentina de Antropología Biológica 1–15 (2018). https://doi.org/10.1002/ajhb.20507
  12. DLIB (2008). http://dlib.net/. Accessed 25 Oct 2017
  13. Yang, D., et al.: An emotion recognition model based on facial recognition in virtual learning environment. Procedia Comput. Sci. 125, 2–10 (2018). https://doi.org/10.1016/j.procs.2017.12.003
  14. Tarnowski, P., et al.: Emotion recognition using facial expressions. Procedia Comput. Sci. 108, 1175–1184 (2017)
  15. Al Anezi, T., Khambay, B., Peng, M.J., O’Leary, E., Ju, X., Ayoub, A.: A new method for automatic tracking of facial landmarks in 3D motion captured images (4D). Int. J. Oral Maxillofac. Surg. 42(1), 9–18 (2013)
  16. Fan, H., Zhou, E.: Approaching human level facial landmark localization by deep learning. Image Vis. Comput. 47, 27–35 (2016)
  17. Liu, S., Fan, Y.Y., Guo, Z., Samal, A., Ali, A.: A landmark-based data-driven approach on 2.5D facial attractiveness computation. Neurocomputing 238, 168–178 (2017). https://doi.org/10.1016/j.neucom.2017.01.050
  18. Economou, S., et al.: Evaluation of facial asymmetry in patients with juvenile idiopathic arthritis: correlation between hard tissue and soft tissue landmarks. Am. J. Orthod. Dentofac. Orthop. 153(5), 662–672 (2018). https://doi.org/10.1016/j.ajodo.2017.08.022
  19. Hallac, R.R., Feng, J., Kane, A.A., Seaward, J.R.: Dynamic facial asymmetry in patients with repaired cleft lip using 4D imaging (video stereophotogrammetry). J. Cranio-Maxillo-Fac. Surg. 45, 8–12 (2017). https://doi.org/10.1016/j.jcms.2016.11.005
  20. Al-Rudainy, D., Ju, X., Stanton, S., Mehendale, F.V., Ayoub, A.: Assessment of regional asymmetry of the face before and after surgical correction of unilateral cleft lip. J. Cranio-Maxillofac. Surg. 46(6), 974–978 (2018)
  21. Chen, F., et al.: 2D facial landmark model design by combining key points and inserted points. Expert Syst. Appl. 7858–7868 (2015). https://doi.org/10.1016/j.eswa.2015.06.015
  22. Vezzetti, E., Marcolin, F.: 3D human face description: landmarks measures and geometrical features. Image Vis. Comput. 30, 698–712 (2012). https://doi.org/10.1016/j.imavis.2012.02.007
  23. Faces+ (Plastic surgery, dermatology, skin and laser), San Diego, California. http://www.facesplus.com/procedure/craniofacial/congenital-disorders/cleft-lip-and-palate/#results. Accessed 02 Mar 2017
  24. New Mexico Cleft Palate Center. http://nmcleft.org/?page_id=107. Accessed 02 Mar 2017
  25. Pediatric Plastic Surgery: St. Louis Children’s Hospital. http://www.stlouischildrens.org/our-services/plastic-surgery/photo-gallery. Accessed 10 Mar 2017
  26. UK HealthCare: Children’s hospital in Kentucky. http://ukhealthcare.uky.edu/kch/liau-blog/gallery/. Accessed 22 Mar 2017
  27. Sossa, H., Guevara, E.: Efficient training for dendrite morphological neural networks. Neurocomputing 132–142 (2014). https://doi.org/10.1016/j.neucom.2013.10.031
  28. Vega, R., et al.: Retinal vessel extraction using lattice neural networks with dendritic processing. Comput. Biol. Med. 20–30 (2015). https://doi.org/10.1016/j.compbiomed.2014.12.016
  29. Zamora, E., Sossa, H.: Dendrite morphological neurons trained by stochastic gradient descent. Neurocomputing 420–431 (2017). https://doi.org/10.1016/j.neucom.2017.04.044

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Tecnológico de Estudios Superiores de Ecatepec, Ecatepec de Morelos, Mexico
