An Online Fuzzy-Based Approach for Human Emotions Detection: An Overview on the Human Cognitive Model of Understanding and Generating Multimodal Actions

  • Amir Aly
  • Adriana Tapus
Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 106)

Abstract

An intelligent robot needs to understand human emotions, and to understand and generate actions through cognitive systems that operate in a way similar to human cognition. In this chapter, we mainly focus on developing an online incremental learning system for emotions based on the Takagi-Sugeno (TS) fuzzy model. Additionally, we present a general overview of understanding and generating multimodal actions from a cognitive point of view. The main objective of this system is to decide whether an observed emotion requires a new corresponding multimodal action to be generated, because it constitutes a new emotion cluster not learnt before, or whether it can be attributed to one of the existing actions in memory, because it belongs to an existing cluster.
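To make the clustering decision described above concrete, the following is a minimal sketch (not the authors' implementation) of potential-based online clustering in the spirit of evolving Takagi-Sugeno models: each incoming emotion feature vector either reinforces an existing cluster, in which case the action already associated with that cluster can be reused, or, if its potential is high and it lies far from all known centers, it spawns a new cluster and signals that a new multimodal action should be generated. The class name, thresholds, and the non-recursive potential computation are illustrative assumptions; the published method may differ.

```python
# Sketch of an online, potential-based emotion clusterer (eTS-style assumption).
import numpy as np

class OnlineEmotionClusterer:
    def __init__(self, radius=0.5, novelty_ratio=1.0):
        self.radius = radius              # cluster influence radius (assumed value)
        self.novelty_ratio = novelty_ratio  # how dominant a point's potential must be
        self.history = []                 # all seen feature vectors (sketch keeps history;
                                          # a recursive potential update would avoid this)
        self.centers = []                 # cluster centers
        self.center_potentials = []       # potential recorded when each center was created

    def _potential(self, z):
        """Potential of point z: high when z lies in a dense region of the data."""
        if not self.history:
            return 1.0
        d2 = np.sum((np.asarray(self.history) - z) ** 2, axis=1)
        return 1.0 / (1.0 + d2.mean())

    def observe(self, z):
        """Process one emotion feature vector; return (cluster_id, is_new_cluster)."""
        z = np.asarray(z, dtype=float)
        p = self._potential(z)
        self.history.append(z)
        if not self.centers:
            self.centers.append(z)
            self.center_potentials.append(p)
            return 0, True
        dists = [np.linalg.norm(z - c) for c in self.centers]
        nearest = int(np.argmin(dists))
        # Open a new cluster only if the point is both dense (potential exceeds that of
        # every existing center) and far from all known centers.
        if p > self.novelty_ratio * max(self.center_potentials) and dists[nearest] > self.radius:
            self.centers.append(z)
            self.center_potentials.append(p)
            return len(self.centers) - 1, True
        return nearest, False
```

In the chapter's framing, a return value of `is_new_cluster == True` would correspond to generating a new multimodal action, while `False` would map the emotion to an action already stored in memory.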

Keywords

Plutchik model · Takagi-Sugeno (TS) fuzzy model · Potential calculation · Cluster centers

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Robotics and Computer Vision Lab, ENSTA ParisTech, Palaiseau, France