Abstract
To address the problem of accurately assessing the training effect of flight simulators, this paper proposes an intelligent algorithm based on neural networks and reinforcement learning. Multi-dimensional data comprising facial-expression features and EEG and eye-movement (EM) physiological signals are analyzed. A new evaluation model for assessing pilot training states such as spatial balance, attention distribution, and neurasthenia is developed, primarily through facial-expression experiments and supplemented by eye-movement and EEG experiments. EEG acquisition and analysis during pilot training subjects (take-off and landing) were completed, and the emotional characteristics of pilots during training were identified. We fused the data from the multi-dimensional channels, constructed mathematical models of pilot maneuver reaction time and attention allocation, monitored and evaluated flight training effects, and conducted controlled experiments. The experimental results show that average recognition rates of 92.598% and 87.013% were achieved for expression recognition and neurasthenia recognition, respectively, and that the ergonomic information from facial expression, EEG, and EM was effectively fused.
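The abstract describes feature-level fusion of the three channels (facial expression, EEG, eye movement). The paper's actual fusion method is not given here, so the following is only a minimal illustrative sketch of one common approach: standardize each modality's feature matrix independently, then concatenate along the feature axis. All array shapes and the helper names `zscore` and `fuse_channels` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def zscore(x):
    """Per-feature standardization; guards against zero-variance columns."""
    std = x.std(axis=0)
    std[std == 0] = 1.0
    return (x - x.mean(axis=0)) / std

def fuse_channels(face_feats, eeg_feats, em_feats):
    """Feature-level fusion: standardize each modality, then concatenate.

    Each argument is a (n_samples, n_features) array from one channel.
    Returns a single (n_samples, total_features) fused matrix.
    """
    return np.hstack([zscore(face_feats), zscore(eeg_feats), zscore(em_feats)])

# Toy data: 10 samples per modality (feature dimensions are illustrative).
rng = np.random.default_rng(0)
face = rng.normal(size=(10, 8))   # e.g. expression descriptors
eeg = rng.normal(size=(10, 16))   # e.g. band-power features
em = rng.normal(size=(10, 4))     # e.g. eye-movement statistics

fused = fuse_channels(face, eeg, em)
print(fused.shape)  # (10, 28)
```

A fused matrix of this form could then feed any downstream classifier; standardizing per modality first keeps a channel with large raw magnitudes (e.g. EEG amplitudes) from dominating the concatenated feature space.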
Funding
This work was supported by the National Natural Science Foundation of China (No. 52072293) and the National Defense Science and Technology Innovation Zone (No. ZT001007104).
Author information
Contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Wenbo Huang, Changyuan Wang, Hong-bo Jia, Pengxiang Xue, and Li Wang. The first draft of the manuscript was written by Wenbo Huang, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval
Not applicable.
Consent to participate
All authors agreed to participate.
Consent for publication
All authors agreed to publication.
Competing interests
Author Wenbo Huang has received research support from Xi'an Technological University. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. Authors Wenbo Huang, Changyuan Wang, Hong-bo Jia, Pengxiang Xue, and Li Wang declare that they have no financial interests. The authors have no relevant financial or non-financial interests to disclose.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of the Topical Collection: New Intelligent Manufacturing Technologies through the Integration of Industry 4.0 and Advanced Manufacturing
Cite this article
Huang, W., Wang, C., Jia, Hb. et al. Modeling and analysis of fatigue detection with multi-channel data fusion. Int J Adv Manuf Technol 122, 291–301 (2022). https://doi.org/10.1007/s00170-022-09364-0