
Using Big Data for Emotionally Intelligent Mobile Services through Multi-Modal Emotion Recognition

  • Yerzhan Baimbetov
  • Ismail Khalil
  • Matthias Steinbauer (Email author)
  • Gabriele Anderst-Kotsis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9102)

Abstract

Humans express and perceive emotional states in a multi-modal fashion, through facial expressions, acoustic cues, gestures, and posture. Our task as AI researchers is to give computers the ability to communicate with users while taking their emotions into account. When recognizing a subject’s emotions, awareness of the emotional context is critically important. Thanks to advances in mobile technology, collecting such contextual data is now feasible. In this paper, the authors describe a first step toward extracting insightful emotional information using a cloud-based Big Data infrastructure. Relevant aspects of emotion recognition and the challenges that come with multi-modal emotion recognition are also discussed.
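
The core idea above, combining facial, acoustic, and contextual evidence into a single emotion estimate, can be illustrated with a decision-level (late) fusion sketch. This is a minimal illustration only: the modality names, emotion labels, probabilities, and confidence weights below are hypothetical assumptions and do not come from the paper.

    # Minimal sketch of decision-level (late) fusion for multi-modal
    # emotion recognition. All names and numbers are illustrative.

    EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

    # Hypothetical per-modality classifier outputs: probability
    # distributions over the emotion labels, e.g. from a facial-expression
    # model, a speech model, and a context model fed by mobile sensor data.
    facial = {"anger": 0.10, "disgust": 0.05, "fear": 0.05,
              "happiness": 0.60, "sadness": 0.10, "surprise": 0.10}
    acoustic = {"anger": 0.15, "disgust": 0.05, "fear": 0.10,
                "happiness": 0.45, "sadness": 0.15, "surprise": 0.10}
    context = {"anger": 0.05, "disgust": 0.05, "fear": 0.05,
               "happiness": 0.70, "sadness": 0.10, "surprise": 0.05}

    # Assumed per-modality confidence weights (sum to 1.0).
    weights = {"facial": 0.5, "acoustic": 0.3, "context": 0.2}

    def fuse(scores_by_modality, weights):
        """Weighted average of per-modality emotion distributions."""
        fused = {e: 0.0 for e in EMOTIONS}
        for modality, scores in scores_by_modality.items():
            w = weights[modality]
            for emotion, p in scores.items():
                fused[emotion] += w * p
        return fused

    fused = fuse({"facial": facial, "acoustic": acoustic,
                  "context": context}, weights)
    print(max(fused, key=fused.get))  # -> "happiness"

Weighted averaging is deliberately the simplest possible fusion rule; in a deployed system the weights would typically be learned from data or adapted to the sensing context rather than fixed by hand.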

Keywords

Emotion recognition · Big data · Context awareness · Affective computing · Machine learning · Intelligent systems

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Yerzhan Baimbetov (1)
  • Ismail Khalil (1)
  • Matthias Steinbauer (1), Email author
  • Gabriele Anderst-Kotsis (1)
  1. Department of Telecooperation, Johannes Kepler University of Linz, Linz, Austria
