Towards Emotion Recognition in Human Computer Interaction

Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 19)

Abstract

The recognition of human emotions by technical systems is regarded as a problem of pattern recognition. Here, machine learning methods are employed that require substantial amounts of 'emotionally labeled' data, because model-based approaches are not available. Problems of emotion recognition are discussed from this point of view, focusing on problems of data gathering and also touching upon the modeling of emotions and machine learning aspects.
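To make the pattern-recognition framing concrete, the following minimal sketch (not the authors' system) trains a nearest-centroid classifier on a handful of 'emotionally labeled' feature vectors. The feature names and values are hypothetical stand-ins for acoustic descriptors such as pitch variation and energy:

```python
# A toy supervised emotion classifier: each utterance is reduced to a
# feature vector, and classification assigns the label of the nearest
# class centroid learned from labeled training data.

# Hypothetical labeled data: (pitch_variation, energy) -> emotion label
train = [
    ((0.9, 0.8), "angry"),
    ((0.8, 0.9), "angry"),
    ((0.2, 0.1), "sad"),
    ((0.1, 0.2), "sad"),
    ((0.6, 0.5), "neutral"),
    ((0.5, 0.6), "neutral"),
]

def centroids(data):
    """Compute the mean feature vector of each emotion class."""
    sums, counts = {}, {}
    for (x1, x2), label in data:
        s1, s2 = sums.get(label, (0.0, 0.0))
        sums[label] = (s1 + x1, s2 + x2)
        counts[label] = counts.get(label, 0) + 1
    return {lb: (s[0] / counts[lb], s[1] / counts[lb]) for lb, s in sums.items()}

def classify(x, cents):
    """Assign the class whose centroid is nearest (squared Euclidean distance)."""
    return min(cents, key=lambda lb: (x[0] - cents[lb][0]) ** 2
                                     + (x[1] - cents[lb][1]) ** 2)

cents = centroids(train)
print(classify((0.85, 0.85), cents))  # -> angry
```

The example also illustrates the data problem the abstract raises: the decision boundaries are entirely determined by the labeled examples, so without substantial amounts of reliably annotated data no model of this kind can generalize.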

Keywords

Affective computing, emotion recognition, human-computer interaction, companion systems



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Domenico Labate, MECMAT, Mediterranea University of Reggio Calabria, Reggio Calabria, Italy
  2. Institute of Neural Information Processing, University of Ulm, Ulm, Germany
