Abstract
Electromyography (EMG) measures the electrical activity of muscles. Physicians also use EMG to study the function of facial muscles through intensity maps (IMs), which support diagnostics and research. However, many existing visualizations neglect anatomical structures and disregard the physical properties of EMG signals. The variance of facial structures between people complicates the generalization of IMs, which is crucial for their correct interpretation. In our work, we overcome these issues by introducing a pipeline that generates anatomically correct IMs of facial muscles. An IM generation algorithm based on a template model incorporates custom surface EMG schemes and combines them with a projection method to display the IMs on the patient’s face in 2D and 3D. We evaluate the generated and projected IMs based on their projection quality for the six basic emotions on several subjects. These visualizations deepen the understanding of muscle activity areas and indicate that a holistic view of the face may be necessary to understand facial muscle activity. Medical experts can use our approach to study the function of facial muscles and to support diagnostics and therapy.
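The abstract describes generating dense intensity maps from sparse surface EMG readings over a face template. The paper's actual algorithm is not reproduced in this excerpt; as a rough illustration of the general idea, sparse per-electrode intensities can be interpolated into a dense 2D map with a thin-plate-spline radial basis function. All electrode positions, region labels, and intensity values below are invented for the sketch and do not come from the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 2D electrode positions (normalized face coordinates)
# and per-electrode sEMG intensities (e.g. RMS amplitude in microvolts).
electrodes = np.array([
    [0.30, 0.80], [0.70, 0.80],   # forehead region
    [0.25, 0.50], [0.75, 0.50],   # cheek region
    [0.40, 0.25], [0.60, 0.25],   # mouth region
])
intensity = np.array([12.0, 11.5, 35.0, 33.0, 8.0, 7.5])

# Thin-plate-spline interpolation spreads the sparse readings into a
# dense intensity map over a grid covering the face template.
rbf = RBFInterpolator(electrodes, intensity, kernel="thin_plate_spline")
gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
grid = np.column_stack([gx.ravel(), gy.ravel()])
im = rbf(grid).reshape(64, 64)   # 64x64 intensity map
```

In a full pipeline such a map would then be warped onto the subject's face via detected landmarks; here it is only a standalone interpolation sketch.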
Supported by Deutsche Forschungsgemeinschaft (DFG - German Research Foundation) project 427899908 BRIDGING THE GAP: MIMICS AND MUSCLES (DE 735/15-1 and GU 463/12-1).
Notes
- 1.
- 2. All shown individuals agreed to have their images published in accordance with the GDPR.
- 3. We support the Looking Glass Portrait (Looking Glass Factory Inc., New York, USA) natively in our pipeline.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Büchner, T., Sickert, S., Graßme, R., Anders, C., Guntinas-Lichius, O., Denzler, J. (2023). Using 2D and 3D Face Representations to Generate Comprehensive Facial Electromyography Intensity Maps. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2023. Lecture Notes in Computer Science, vol 14362. Springer, Cham. https://doi.org/10.1007/978-3-031-47966-3_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47965-6
Online ISBN: 978-3-031-47966-3
eBook Packages: Computer Science, Computer Science (R0)