Identification and analysis of photometric points on 2D facial images: a machine learning approach in orthodontics
The lack of an effective, automated facial landmark identification tool prompted us to design and develop a smart machine learning approach. The study addresses two objectives: the primary objective is to assess the effectiveness and accuracy of an algorithmic methodology in identifying and analysing facial landmarks on two-dimensional (2D) facial images, and the secondary objective is to understand the clinical application of automation in facial landmark identification. The study utilised 418 facial landmark points and 220 landmark measures from 22 2D facial images of volunteers, and applied the deep learning algorithm ‘You Only Look Once’ (YOLO) to determine the accuracy of the developed system and its clinical applications. The system identified all 418 landmarks, with 100% facial recognition. Of the 220 landmark measures, 48 (21.81%) fell in the error range of 0 to 1 mm, 75 (34.09%) in the range of 2 to 3 mm, 92 (41.81%) in the range of 4 to 5 mm, and 5 (2.27%) in the range of 6 mm. This smart and innovative approach provides valuable training and a helpful tool for students performing clinical facial analysis, and the automated system, with its effective and efficient algorithm, delivers fast and reliable landmark identification and analysis.
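The error statistics above can be illustrated with a short sketch. The helper below is hypothetical (it is not part of the study's software): it computes an inter-landmark distance in millimetres from pixel coordinates, given an assumed pixel-to-mm calibration factor, and buckets absolute errors into the ranges reported in the abstract. All sample values are illustrative only.

```python
import math

# Hypothetical calibration factor: millimetres per pixel for a given image.
# A real system would derive this from a scale reference in the photograph.
MM_PER_PIXEL = 0.25

def landmark_distance_mm(p1, p2, mm_per_pixel=MM_PER_PIXEL):
    """Euclidean distance between two (x, y) pixel landmarks, in mm."""
    return math.dist(p1, p2) * mm_per_pixel

def bucket_errors(errors_mm):
    """Count absolute measure errors (mm) per range used in the study."""
    buckets = {"0-1 mm": 0, "2-3 mm": 0, "4-5 mm": 0, "6 mm": 0}
    for e in errors_mm:
        e = abs(e)
        if e <= 1:
            buckets["0-1 mm"] += 1
        elif e <= 3:
            buckets["2-3 mm"] += 1
        elif e <= 5:
            buckets["4-5 mm"] += 1
        else:
            buckets["6 mm"] += 1
    return buckets

# Illustrative usage with made-up error values (not the study's data):
print(bucket_errors([0.4, 2.5, 4.8, 6.0, 1.0]))
# → {'0-1 mm': 2, '2-3 mm': 1, '4-5 mm': 1, '6 mm': 1}
```

In a full evaluation, each of the 220 automated measures would be compared against its manually annotated reference value and the resulting errors passed to `bucket_errors` to reproduce the distribution reported above.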
Keywords: Orthodontic photometric points · Orthodontic facial measures · Frontal facial photography · YOLO · Machine learning algorithm · Deep learning · Orthodontics · Smart learning
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
No ethical approval was required, as the study only utilised images of volunteers' faces. Suitable written consent was taken from each volunteer before the start of the study.
Informed consent
Informed written consent was obtained from all individual participants included in the study.