Enhanced facial expression recognition using 3D point sets and geometric deep learning

  • Original Article
  • Published in Medical & Biological Engineering & Computing

Abstract

Facial expression recognition plays an essential role in human conversation and human–computer interaction. Previous studies have recognized facial expressions mainly through 2D image processing, which requires sensitive feature engineering and conventional machine learning approaches. The purpose of the present study was to recognize facial expressions by applying a new class of deep learning, called geometric deep learning, directly to 3D point cloud data. Two databases (Bosphorus and SIAT-3DFE) were used. The Bosphorus database includes 65 subjects with seven basic expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral). The SIAT-3DFE database has 150 subjects and four basic facial expressions (neutral, happiness, sadness, and surprise). First, preprocessing procedures such as face center cropping, data augmentation, and point cloud denoising were applied to the 3D face scans. Then, a geometric deep learning model called PointNet++ was applied. A hyperparameter tuning process was performed to find the optimal model parameters. Finally, the developed model was evaluated using the recognition rate and the confusion matrix. The facial expression recognition accuracy on the Bosphorus database was 69.01% for seven expressions and reached 85.85% when recognizing five specific expressions (anger, disgust, happiness, surprise, and neutral). The recognition rate was 78.70% on the SIAT-3DFE database. The present study suggests that 3D point clouds can be processed directly for facial expression recognition using a geometric deep learning approach. As a perspective, the developed model will be applied to facial palsy patients to guide and optimize the functional rehabilitation program.
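The preprocessing steps mentioned in the abstract (face center cropping and point cloud denoising) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cropping radius, the neighbour count `k`, and the outlier threshold are hypothetical parameters, and the crop is centred on the cloud centroid rather than on a detected facial landmark.

```python
import numpy as np

def center_crop(points, radius):
    """Keep only points within `radius` of the cloud centroid
    (a stand-in for cropping around a detected nose tip)."""
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    return points[dist <= radius]

def denoise(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance
    to their k nearest neighbours is far above the global average."""
    diffs = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance (0)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# Toy face-like cloud plus two far-away noise points
rng = np.random.default_rng(0)
cloud = rng.normal(scale=1.0, size=(200, 3))
noisy = np.vstack([cloud, [[50.0, 50.0, 50.0], [-60.0, 0.0, 0.0]]])

cropped = center_crop(noisy, radius=5.0)
clean = denoise(cropped)
print(noisy.shape[0], cropped.shape[0], clean.shape[0])
```

A real pipeline would also normalize the cropped cloud and resample it to a fixed number of points before feeding it to PointNet++; the brute-force pairwise distance matrix used here is only practical for small clouds.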



Funding

This work was financially supported by Sorbonne Center for Artificial Intelligence (SCAI).

Author information

Corresponding author

Correspondence to Tien-Tuan Dao.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Nguyen, DP., Ho Ba Tho, MC. & Dao, TT. Enhanced facial expression recognition using 3D point sets and geometric deep learning. Med Biol Eng Comput 59, 1235–1244 (2021). https://doi.org/10.1007/s11517-021-02383-1
