
Modeling and analysis of fatigue detection with multi-channel data fusion

  • ORIGINAL ARTICLE
  • Published:
The International Journal of Advanced Manufacturing Technology

Abstract

To address the problem of accurately assessing the training effect of flight simulators, this paper proposes an intelligent algorithm based on neural networks and reinforcement learning. Multi-dimensional data comprising facial expression features and EEG and eye-movement (EM) physiological signals are analyzed. A new evaluation model for assessing human spatial balance, attention distribution, neurasthenia, and other pilot training states is developed, with facial expression experiments as the primary source of evidence and eye-movement and EEG experiments as supplements. EEG acquisition and analysis were completed for the pilot training subjects (take-off and landing), and the emotional characteristics of pilots during training were identified. We fused the data from the multi-dimensional channels, constructed mathematical models of pilot maneuver reaction time and attention allocation, monitored and evaluated flight training effects, and conducted controlled experiments. The experimental results show that average recognition rates of 92.598% and 87.013% were achieved for expression recognition and neurasthenia recognition, respectively, and that the ergonomic information from facial expressions, EEG, and EM was effectively fused.
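
The abstract centers on feature-level fusion of facial-expression, EEG, and eye-movement (EM) channels followed by a neural-network classifier. As a rough, non-authoritative illustration of that idea, the sketch below standardizes per-channel feature vectors, concatenates them, and trains a small neural network on synthetic data. All dimensions, feature names, and the classifier choice are assumptions made for demonstration; this is not the authors' implementation, and their reinforcement-learning component is not reproduced.

```python
# Minimal sketch of feature-level (early) fusion across three channels,
# using hypothetical per-trial feature vectors and synthetic labels only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples = 200

# Hypothetical per-channel features (dimensions are illustrative).
face_feats = rng.normal(size=(n_samples, 64))  # e.g. facial expression descriptors
eeg_feats = rng.normal(size=(n_samples, 32))   # e.g. band-power features per electrode
em_feats = rng.normal(size=(n_samples, 8))     # e.g. blink rate, fixation duration

labels = rng.integers(0, 2, size=n_samples)    # 0 = alert, 1 = fatigued (synthetic)

# Early fusion: z-score each channel separately, then concatenate.
fused = np.hstack([StandardScaler().fit_transform(x)
                   for x in (face_feats, eeg_feats, em_feats)])

# A small feed-forward network stands in for the paper's neural-network stage.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```

Standardizing each channel before concatenation keeps one high-variance modality (e.g. EEG band power) from dominating the fused representation; late fusion, which combines per-channel classifier outputs instead, is the usual alternative when channels are sampled at very different rates.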


Funding

This work was supported by the National Natural Science Foundation of China (No. 52072293) and the National Defense Science and Technology Innovation Zone (No. ZT001007104).

Author information


Contributions

All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Wenbo Huang, Changyuan Wang, Hong-bo Jia, Pengxiang Xue, and Li Wang. The first draft of the manuscript was written by Wenbo Huang, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Changyuan Wang.

Ethics declarations

Ethics approval

Not applicable.

Consent to participate

All authors agreed to participate.

Consent for publication

All authors agreed to publication.

Competing interests

Author Wenbo Huang has received research support from Xi’an Technological University. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. Authors Wenbo Huang, Changyuan Wang, Hong-bo Jia, Pengxiang Xue, and Li Wang declare that they have no financial interests. The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection: New Intelligent Manufacturing Technologies through the Integration of Industry 4.0 and Advanced Manufacturing


About this article


Cite this article

Huang, W., Wang, C., Jia, Hb. et al. Modeling and analysis of fatigue detection with multi-channel data fusion. Int J Adv Manuf Technol 122, 291–301 (2022). https://doi.org/10.1007/s00170-022-09364-0

