
OQCNN: optimal quantum convolutional neural network for classification of facial expression

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Human emotions play a vital role in developing social interaction and interpersonal relationships, because humans express and convey their thoughts and feelings visually rather than solely through verbal communication. Facial expression recognition has recently been widely adopted in emerging human–computer interaction and human–robot interaction technologies. Although humans can identify facial emotions with ease, reliable automated recognition of facial expressions remains a complicated task. The main intention of this paper is to develop an automated facial expression recognition system. To attain this goal, an improved particle swarm optimization backtracking search algorithm optimized quantum convolutional neural network (IPSOBSA-QCNN) approach is proposed to accurately identify and classify facial expressions such as fear, happy, disgust, sad, surprise, negative, positive, and dominated. The QCNN recognizes facial expressions efficiently and at high processing speed, but its classification performance is limited by hyperparameter tuning concerns. Therefore, the hyperparameters of the quantum convolutional neural network (QCNN) are optimized using the improved particle swarm optimization backtracking search algorithm (IPSOBSA) to enhance classification accuracy. The proposed IPSOBSA-QCNN approach is evaluated on three diverse datasets, namely the Real-world Affective Faces dataset, the Emotic dataset, and the Facial Expression Comparison dataset. Its effectiveness is determined by comparing the proposed IPSOBSA-QCNN approach with existing methods using different evaluation metrics. The experimental results show that the proposed IPSOBSA-QCNN approach achieves a classification accuracy of about 98% for the Real-world Affective Faces dataset, 97.5% for the Emotic dataset, and 97% for the Facial Expression Comparison dataset.
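The paper's optimizer itself is not reproduced here, but the hybrid idea described in the abstract, PSO velocity/position updates combined with a backtracking-search-style historical population and greedy selection, applied to QCNN hyperparameters, can be illustrated with a minimal NumPy sketch. Everything below (the three-dimensional search space, the `evaluate_qcnn` placeholder objective, the swarm size, and the coefficients) is an illustrative assumption rather than the authors' implementation; in practice `evaluate_qcnn` would train and validate the QCNN with the decoded hyperparameters and return one minus the validation accuracy.

```python
# Minimal sketch of a hybrid PSO + backtracking-search hyperparameter search,
# in the spirit of IPSOBSA. All names and settings are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical search space: [learning rate, number of quantum filters, batch size]
lower = np.array([1e-4, 2.0, 16.0])
upper = np.array([1e-1, 16.0, 128.0])
dim = len(lower)

def evaluate_qcnn(params):
    """Placeholder objective. A real implementation would round the integer
    hyperparameters, train/validate the QCNN, and return 1 - accuracy."""
    lr, n_filters, batch = params
    # Toy surrogate loss so the sketch runs end to end.
    return (np.log10(lr) + 2.0) ** 2 + (n_filters - 8) ** 2 / 64 + (batch - 64) ** 2 / 4096

n_particles, n_iters = 20, 50
pos = lower + rng.random((n_particles, dim)) * (upper - lower)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([evaluate_qcnn(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
hist = pos.copy()                  # "historical" population, as in backtracking search

for t in range(n_iters):
    # --- PSO velocity/position update ---
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)

    # --- backtracking-search-style step: perturb toward the historical population ---
    F = 3.0 * rng.standard_normal()                 # random scale factor
    mask = rng.random((n_particles, dim)) < 0.5     # per-dimension crossover mask
    trial = np.clip(pos + F * mask * (hist - pos), lower, upper)

    # Greedy selection between the PSO position and the backtracking trial
    for i in range(n_particles):
        cand = min((pos[i], trial[i]), key=evaluate_qcnn)
        val = evaluate_qcnn(cand)
        pos[i] = cand
        if val < pbest_val[i]:
            pbest_val[i], pbest[i] = val, cand.copy()
    gbest = pbest[pbest_val.argmin()].copy()
    if rng.random() < 0.5:
        hist = pos.copy()                           # occasionally refresh the history

print("best hyperparameters (lr, filters, batch):", gbest)
```

The greedy per-particle comparison between the swarm update and the history-guided trial is one simple way to combine the two search behaviours; the paper may weight or schedule them differently.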


Availability of data and material

Data sharing is not applicable to this article as no new data were created or analyzed in this study.


Funding

Not applicable.

Author information


Contributions

TS agreed on the content of the study. TS and SS collected all the data for analysis. TS agreed on the methodology. TS and SS completed the analysis based on the agreed steps. The results and conclusions were discussed and written together. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to T. Sathya.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Human and animal rights

This article does not contain any studies with human or animal subjects performed by any of the authors.

Informed consent

For this type of study informed consent is not required.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sathya, T., Sudha, S. OQCNN: optimal quantum convolutional neural network for classification of facial expression. Neural Comput & Applic 35, 9017–9033 (2023). https://doi.org/10.1007/s00521-022-08161-w

