Hybrid multi-modal emotion recognition framework based on InceptionV3DenseNet

Published in: Multimedia Tools and Applications

Abstract

Emotion recognition is a complex research problem because individuals express emotional cues through several modalities, such as audio, facial expressions, and language. Recognizing emotion from a single modality is not always feasible, since each modality can be disturbed by several factors, and existing models do not attain maximum accuracy in identifying individuals' expressions. In this paper, a novel hybrid multi-modal emotion recognition framework, InceptionV3DenseNet, is proposed to improve recognition accuracy. Initially, contextual features are extracted from the video, audio, and text modalities. From the video modality, shot length, lighting key, motion, and color features are extracted; zero-crossing rate, Mel-frequency cepstral coefficients (MFCC), energy, and pitch are extracted from the audio modality; and unigram, bigram, and TF-IDF features are extracted from the textual modality. During feature extraction, high-level features with better generalization capability are obtained. The extracted features are fused using multi-set integrated canonical correlation analysis (MICCA) and provided as input to the proposed hybrid network model. MICCA captures the correlations among the multimodal features, yielding better performance with a single learning phase. The proposed hybrid deep learning model is then used to classify emotional states with a focus on accuracy and reliability. Simulations are conducted on the MATLAB platform and evaluated using the MELD and RAVDESS datasets. The results show that the proposed model is more efficient and accurate than the compared models, attaining an overall accuracy of 74.87% on MELD and 95.25% on RAVDESS.
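The per-modality feature extraction and fusion described above can be pictured in a few lines of code. The sketch below is illustrative only and is not the authors' MATLAB implementation: it assumes the Python libraries librosa and scikit-learn, computes the audio features named in the abstract (zero-crossing rate, MFCC, energy, pitch) and the unigram/bigram TF-IDF text features, and stands in for MICCA with scikit-learn's plain two-set CCA, since MICCA generalizes canonical correlation to more than two feature sets. The file paths, transcripts, and dimensionalities are hypothetical.

```python
# Illustrative sketch only -- not the paper's MATLAB implementation.
# Assumes librosa, scikit-learn, numpy; paths and sizes are hypothetical.
import numpy as np
import librosa
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cross_decomposition import CCA


def audio_features(path, sr=16000):
    """Zero-crossing rate, MFCC, energy (RMS) and pitch for one utterance,
    averaged over time into a single fixed-length vector."""
    y, sr = librosa.load(path, sr=sr)
    zcr = librosa.feature.zero_crossing_rate(y)          # (1, T)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, T)
    rms = librosa.feature.rms(y=y)                       # (1, T)
    f0 = librosa.yin(y, fmin=50, fmax=500, sr=sr)        # (T,)
    return np.concatenate([
        zcr.mean(axis=1), mfcc.mean(axis=1), rms.mean(axis=1), [f0.mean()]
    ])


# Hypothetical corpus: one audio file and one transcript per utterance.
wav_paths = ["utt_001.wav", "utt_002.wav", "utt_003.wav"]
transcripts = ["i am so happy today", "leave me alone", "that is wonderful news"]

# Audio modality: one fixed-length vector per utterance.
X_audio = np.vstack([audio_features(p) for p in wav_paths])

# Text modality: unigram + bigram TF-IDF (ngram_range covers both).
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=300)
X_text = vectorizer.fit_transform(transcripts).toarray()

# Fusion: the paper uses MICCA over all modalities; as a simplified
# stand-in, project two feature sets into a shared correlated subspace
# with plain two-set CCA (a real run needs many more utterances).
cca = CCA(n_components=2)
cca.fit(X_audio, X_text)
A_c, T_c = cca.transform(X_audio, X_text)
fused = np.hstack([A_c, T_c])      # would feed the hybrid classifier

print(fused.shape)                 # (n_utterances, 2 * n_components)
```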

Data availability

Data sharing is not applicable to this article.

Author information

Corresponding author

Correspondence to Fakir Mashuque Alamgir.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Alamgir, F.M., Alam, M.S. Hybrid multi-modal emotion recognition framework based on InceptionV3DenseNet. Multimed Tools Appl 82, 40375–40402 (2023). https://doi.org/10.1007/s11042-023-15066-w
