
Abstract

Long before the concept of “affect” entered human-computer interaction (HCI), emotion was an established subject of research, a history detailed in the Handbook of Emotions (Lewis & Haviland-Jones, 2000). Emotion is a positive or negative mental state that combines physiological input with cognitive appraisal (Oatley, 1987; Ortony et al., 1990; Thagard, 2005). Although not traditionally treated as part of cognitive science, emotion has recently been recognized as influential in rational decision making. Predominant theories explain emotion as judgment, as bodily reaction, or as a combination of the two: judgments are made (such as satisfaction from the outcome of hard work) and/or bodily reactions occur (such as sweating from fear of a task, or nervousness), depending on a person's interactions or disposition.

Emotional communication is important for understanding social-emotional influences in the workplace. A growing number of researchers are interested in integrating emotions into HCI, a field that has become known as “affective computing” (Picard, 1997). Affective computing builds an “affect model” from a variety of information sources, yielding a personalized computing system capable of perceiving and interpreting human feelings and of generating intelligent, sensitive, and friendly responses.
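The sense–interpret–respond loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the chapter's method: the feature names (pitch variance, smile intensity, skin conductance), the thresholds, and the rule-based "affect model" are all hypothetical assumptions standing in for the multimodal classifiers a real affective system would use.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    """Toy stand-ins for multimodal affect cues (all hypothetical)."""
    speech_pitch_var: float   # high pitch variance often accompanies arousal
    smile_intensity: float    # 0.0 (none) to 1.0 (broad smile)
    skin_conductance: float   # normalized galvanic skin response


def interpret(obs: Observation) -> str:
    """Map raw cues to a coarse affect label (a stand-in 'affect model')."""
    arousal = 0.5 * obs.speech_pitch_var + 0.5 * obs.skin_conductance
    valence = obs.smile_intensity
    if arousal > 0.6 and valence < 0.3:
        return "stressed"
    if valence > 0.6:
        return "pleased"
    return "neutral"


def respond(affect: str) -> str:
    """Choose a response style sensitive to the inferred state."""
    styles = {
        "stressed": "Slow down, simplify the interface, offer help.",
        "pleased": "Keep the current interaction style.",
        "neutral": "Probe gently for more affective cues.",
    }
    return styles[affect]


# High arousal, low valence -> the system adapts its behavior.
print(respond(interpret(Observation(0.9, 0.1, 0.8))))
```

Real systems replace the hand-written rules with trained models over speech, facial, and physiological signals, but the loop structure (perceive, interpret, adapt the response) is the same.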


References

  1. Aggarwal, J. K., & Cai, Q. (1999). Human motion analysis: A review. Computer Vision and Image Understanding, 73(3), 428–440.
  2. Damasio, A. R., & Damasio, H. (1992). Brain and language. Scientific American, September, 89–95.
  3. Blaney, P. H. (1986). Affect and memory: A review. Psychological Bulletin, 99(2), 229–246.
  4. Bregler, C., Covell, M., & Slaney, M. (1997). Video rewrite: Driving visual speech with audio. In ACM SIGGRAPH '97 (pp. 353–360).
  5. Cahn, J. E. (1990). The generation of affect in synthesized speech. Journal of the American Voice I/O Society, 8, 1–19.
  6. Campbell, N. (2004). Perception of affect in speech – Towards an automatic processing of paralinguistic information in spoken conversation. In ICSLP 2004, Jeju (pp. 881–884).
  7. Cowie, R. (2001). Emotion recognition in human–computer interaction. IEEE Signal Processing Magazine, 18(1), 32–80.
  8. Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Gosset/Putnam Press.
  9. Eide, E., Aaron, A., Bakis, R., et al. (2002). A corpus-based approach to <ahem/> expressive speech synthesis. In IEEE Speech Synthesis Workshop, Santa Monica (pp. 79–84).
  10. Ekman, P. (1999). Basic emotions. In Handbook of cognition and emotion. New York: John Wiley.
  11. Ekman, P., & Friesen, W. V. (1997). Manual for the facial action coding system. Palo Alto, CA: Consulting Psychologists Press.
  12. Etcoff, N. L., & Magee, J. J. (1992). Categorical perception of facial expressions. Cognition, 44, 227–240.
  13. Gavrila, D. M. (1999). The visual analysis of human movement: A survey. Computer Vision and Image Understanding, 73(1), 82–98.
  14. Gobl, C., & Chasaide, A. N. (2003). The role of voice quality in communicating emotion, mood and attitude. Speech Communication, 40, 189–212.
  15. Goleman, D. (1998). Working with emotional intelligence. New York: Bantam Books.
  16. Gutierrez-Osuna, R., Kakumanu, P. K., Esposito, A., Garcia, O. N., Bojorquez, A., Castillo, J. L., & Rudomin, I. (2005). Speech-driven facial animation with realistic dynamics. IEEE Transactions on Multimedia, 7(1), 33–42.
  17. Hong, P. Y., Wen, Z., & Huang, T. S. (2002). Real-time speech-driven face animation with expressions using neural networks. IEEE Transactions on Neural Networks, 13(4), 916–927.
  18. James, W. (1884). What is emotion? Mind, 9, 188–205.
  19. Lewis, M., & Haviland-Jones, J. M. (2000). Handbook of emotions. New York: Guilford Press.
  20. Massaro, D. W., Beskow, J., Cohen, M. M., Fry, C. L., & Rodriguez, T. (1999). Picture my voice: Audio to visual speech synthesis using artificial neural networks. In AVSP '99, Santa Cruz, CA (pp. 133–138).
  21. Moriyama, T., & Ozawa, S. (1999). Emotion recognition and synthesis system on speech. In IEEE International Conference on Multimedia Computing and Systems, Florence, Italy (Vol. 1, pp. 840–844).
  22. Mozziconacci, S. J. L., & Hermes, D. J. (2000). Expression of emotion and attitude through temporal speech variations. In 6th International Conference on Spoken Language Processing, ICSLP 2000, Beijing.
  23. Oatley, K. (1987). Cognitive science and the understanding of emotions. Cognition and Emotion, 3(1), 209–216.
  24. Ortony, A., Clore, G. L., & Collins, A. (1990). The cognitive structure of emotions. Cambridge, UK: Cambridge University Press.
  25. Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning. Champaign, IL: University of Illinois Press.
  26. Pavlovic, V. I., Sharma, R., & Huang, T. S. (1997). Visual interpretation of hand gestures for human–computer interaction: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 677–695.
  27. Petrushin, V. A. (2000). Emotion recognition in speech signal: Experimental study, development and application. In 6th International Conference on Spoken Language Processing, ICSLP 2000, Beijing (pp. 222–225).
  28. Picard, R. W. (1997). Affective computing. Cambridge, MA: MIT Press.
  29. Picard, R. W. (2003). Affective computing: Challenges. International Journal of Human–Computer Studies, 59(1–2), 55–64.
  30. Rapaport, D. (1961). Emotions and memory. New York: Science Editions.
  31. Scherer, K. R. (1986). Vocal affect expression: A review and a model for future research. Psychological Bulletin, 99, 143–165.
  32. Schlossberg, H. (1954). Three dimensions of emotion. Psychological Review, 61, 81–88.
  33. Schröder, M. (2001). Emotional speech synthesis: A review. In Eurospeech 2001, Aalborg, Denmark (pp. 561–564).
  34. Tao, J., & Tan, T. (2005). Affective computing: A review. In ACII 2005 (pp. 981–995).
  35. Tato, R., Santos, R., Kompe, R., & Pardo, J. M. (2002). Emotional space improves emotion recognition. In ICSLP 2002, Denver (pp. 2029–2032).
  36. Thagard, P. (2005). Mind: Introduction to cognitive science. Cambridge, MA: MIT Press.
  37. Verma, A., Subramaniam, L. V., Rajput, N., et al. (2004). Animating expressive faces across languages. IEEE Transactions on Multimedia, 6(6), 791–800.
  38. Yamamoto, E., Nakamura, S., & Shikano, K. (1998). Lip movement synthesis from speech based on hidden Markov models. Speech Communication, 26, 105–115.


Copyright information

© 2009 Springer-Verlag London Limited


Cite this chapter

Tao, J., Tan, T. (2009). Introduction. In: Tao, J., Tan, T. (eds) Affective Information Processing. Springer, London. https://doi.org/10.1007/978-1-84800-306-4_1


  • DOI: https://doi.org/10.1007/978-1-84800-306-4_1

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84800-305-7

  • Online ISBN: 978-1-84800-306-4
