
Emotional computing based on cross-modal fusion and edge network data incentive

  • Lei Ma
  • Feng Ju
  • Jing Wan
  • Xiaoyan Shen (corresponding author)
Original Article

Abstract

In large-scale emotional-event and complex emotion-recognition applications, improving recognition accuracy, computational efficiency, and quality of user experience is the first problem to be solved. To address these problems, this paper proposes an emotional computing algorithm based on cross-modal fusion and edge-network data incentive. To improve the efficiency of emotional data collection and the accuracy of emotion recognition, deep cross-modal fusion captures the semantic deviation between modalities through a non-linear cross-layer mapping; on this basis, a deep-fusion cross-modal data fusion method is designed. To improve computational efficiency and quality of user experience, a data incentive algorithm for the edge network is designed, based on the overlapping delay gaps and incentive weights of large-scale data collection and error detection. Finally, the edge network is mapped from the set of emotional data elements triggered by heterogeneous emotional events into a finite data-set space in which all emotional events and emotional data elements are balanced, and an emotional computing algorithm based on cross-modal data fusion is designed over this space. Simulation experiments and theoretical analysis show that the proposed algorithm outperforms both the edge-network data incentive algorithm and the cross-modal data fusion algorithm in recognition accuracy, complex-emotion recognition efficiency, computational efficiency, and delay.
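The abstract describes two computational components at a high level: a non-linear cross-layer mapping that fuses features from different modalities, and an incentive weight for edge-network contributions derived from delay gaps and detected errors. The paper gives no code here; the following is a minimal Python sketch of both ideas under our own assumptions. The function names (cross_layer_fusion, incentive_weight), the random projection weights, the dimensions, and the reading of "delay gap" as slack before a collection deadline are all hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_layer_fusion(x_a, x_v, hidden=64, out_dim=32):
    """Hypothetical non-linear cross-layer mapping: each modality is first
    projected through its own non-linear layer, then the two hidden codes
    are concatenated and mapped again, so the fused representation can
    reflect semantic deviation between the modalities."""
    W_a = rng.standard_normal((hidden, x_a.size)) * 0.1   # e.g., audio projection
    W_v = rng.standard_normal((hidden, x_v.size)) * 0.1   # e.g., visual projection
    h_a = np.tanh(W_a @ x_a)                              # modality-specific code
    h_v = np.tanh(W_v @ x_v)
    W_f = rng.standard_normal((out_dim, 2 * hidden)) * 0.1
    return np.tanh(W_f @ np.concatenate([h_a, h_v]))      # fused feature

def incentive_weight(delay, deadline, error_rate):
    """Hypothetical incentive weight for an edge node's contribution:
    reports that arrive well inside the collection deadline (large delay
    gap) with a low detected error rate earn a larger weight."""
    gap = max(deadline - delay, 0.0) / deadline           # normalized delay slack
    return gap * (1.0 - error_rate)

# Toy usage: fuse 128-d audio and 256-d visual features, then weight one report.
fused = cross_layer_fusion(rng.standard_normal(128), rng.standard_normal(256))
w = incentive_weight(delay=12.0, deadline=50.0, error_rate=0.05)
print(fused.shape, round(w, 3))   # -> (32,) 0.722
```

One plausible use, in the spirit of the abstract, is to rescale each edge node's contribution by such a weight before fusion, so that timely, low-error data dominates the balanced data-set space.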

Keywords

Emotional computing · Cross-modal fusion · Edge network · Data incentive · Emotional recognition


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. School of Information Science and Technology, Nantong University, Nantong, China
  2. Nantong Rail Transit Co., Ltd., Nantong, China
