Abstract
Long before the concept of “affect” entered human-computer interaction (HCI), emotion had been the subject of sustained research, a history detailed in the Handbook of Emotions (Lewis & Haviland-Jones, 2000). Emotion is a positive or negative mental state that combines physiological input with cognitive appraisal (Oatley, 1987; Ortony et al., 1990; Thagard, 2005). Although not traditionally considered part of cognitive science, emotion has more recently been recognized as playing an important role in rational decision making. Predominant theories explain emotion as judgment, as bodily reaction, or as a combination of the two: judgments are made (such as satisfaction from the outcome of hard work) and/or bodily reactions occur (such as sweating from fear of a task, or nervousness), depending on a person's interactions and dispositions.
Emotional communication is important to understanding social and emotional influences in the workplace. A growing number of researchers are now interested in integrating emotions into HCI, an effort that has become known as “affective computing” (Picard, 1997). Affective computing builds an “affect model” from a variety of information sources, yielding a personalized computing system capable of perceiving and interpreting human feelings and of generating intelligent, sensitive, and friendly responses.
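To make the perceive–interpret–respond loop concrete, here is a minimal, purely illustrative sketch of an affect-model pipeline: sensed signals (physiological and linguistic) are mapped to a valence/arousal estimate, which then selects a response style. All signal names, thresholds, and scaling constants are hypothetical assumptions, not part of any system described in this chapter.

```python
from dataclasses import dataclass


@dataclass
class Signals:
    """Hypothetical sensed inputs to the affect model."""
    heart_rate: float      # beats per minute (physiological input)
    speech_pitch: float    # mean fundamental frequency, in Hz
    word_sentiment: float  # cognitive/linguistic appraisal cue in [-1, 1]


def estimate_affect(s: Signals) -> tuple[float, float]:
    """Return a (valence, arousal) pair, each clipped to [-1, 1]."""
    # Valence driven mostly by the appraisal-like sentiment cue.
    valence = max(-1.0, min(1.0, s.word_sentiment))
    # Arousal driven by physiological activation (crude linear scaling).
    raw = (s.heart_rate - 70.0) / 50.0 + (s.speech_pitch - 150.0) / 300.0
    arousal = max(-1.0, min(1.0, raw))
    return valence, arousal


def choose_response(valence: float, arousal: float) -> str:
    """Pick a response style from the affect estimate."""
    if valence < -0.3 and arousal > 0.3:
        return "calming"       # user appears distressed
    if valence > 0.3:
        return "enthusiastic"  # user appears pleased
    return "neutral"


# A distressed-looking input (fast heart rate, high pitch, negative words)
# should steer the system toward a calming response style.
v, a = estimate_affect(Signals(heart_rate=105, speech_pitch=240,
                               word_sentiment=-0.6))
print(choose_response(v, a))  # prints "calming"
```

Real affect models replace these hand-set rules with learned mappings from much richer feature sets (facial action units, voice quality, gesture), but the overall structure — fuse signals, estimate an affective state, adapt the response — is the same.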
References
Aggarwal, J. K., & Cai, Q. (1999). Human motion analysis: A review. Computer Vision and Image Understanding, 73(3), 428–440.
Damasio, A. R., & Damasio, H. (1992). Brain and language. Scientific American, September, 89–95.
Blaney, P. H. (1986). Affect and memory: A review. Psychological Bulletin, 99(2), 229–246.
Bregler, C., Covell, M., & Slaney, M. (1997). Video rewrite: Driving visual speech with audio. ACM SIGGRAPH'97, 353–360.
Cahn, J. E. (1990). The generation of affect in synthesized speech. Journal of the American Voice I/O Society, 8, 1–19.
Campbell, N. (2004). Perception of affect in speech – Towards an automatic processing of paralinguistic information in spoken conversation. In ICSLP2004, Jeju (pp. 881–884).
Cowie, R. (2001). Emotion recognition in human–computer interaction. IEEE Signal Processing Magazine, 18(1), 32–80.
Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Gosset/Putnam Press.
Eide, E., Aaron, A., Bakis, R., et al. (2002). A corpus-based approach to <ahem/> expressive speech synthesis. In IEEE Speech Synthesis Workshop, Santa Monica (pp. 79–84).
Ekman, P. (1999). Basic emotions. Handbook of cognition and emotion. New York: John Wiley.
Ekman, P., & Friesen, W. V. (1997). Manual for the facial action coding system. Palo Alto, CA: Consulting Psychologists Press.
Etcoff, N. L., & Magee J. J. (1992). Categorical perception of facial expressions. Cognition, 44, 227–240.
Gavrila, D. M. (1999). The visual analysis of human movement: A survey. Computer Vision and Image Understanding, 73(1), 82–98.
Gobl, C., & Chasaide, A. N. (2003). The role of voice quality in communicating emotion, mood and attitude. Speech Communication, 40, 189–212.
Goleman, D. (1998). Working with emotional intelligence. New York: Bantam Books.
Gutierrez-Osuna, R., Kakumanu, P.K., Esposito, A., Garcia, O.N., Bojorquez, A., Castillo, J.L., & Rudomin, I. (2005). Speech-driven facial animation with realistic dynamics. IEEE Transactions on Multimedia, 7(1), 33–42.
Hong, P. Y., Wen, Z., & Huang, T. S. (2002). Real-time speech-driven face animation with expressions using neural networks. IEEE Transactions on Neural Networks, 13(4), 916–927.
James, W. (1884). What is emotion? Mind, 9, 188–205.
Lewis, M., & Haviland-Jones, J. M. (2000). Handbook of emotions. New York: Guilford Press.
Massaro, D. W., Beskow, J., Cohen, M. M., Fry, C. L., & Rodriguez, T. (1999). Picture my voice: Audio to visual speech synthesis using artificial neural networks. In AVSP'99, Santa Cruz, CA (pp. 133–138).
Moriyama, T., & Ozawa, S. (1999). Emotion recognition and synthesis system on speech. In IEEE International Conference on Multimedia Computing and Systems, Florence, Italy (Vol. 1, pp. 840–844).
Mozziconacci, S. J. L., & Hermes, D. J. (2000). Expression of emotion and attitude through temporal speech variations. In 6th International Conference on Spoken Language Processing, ICSLP2000, Beijing.
Oatley, K. (1987). Cognitive science and the understanding of emotions. Cognition and Emotion, 3(1), 209–216.
Ortony, A., Clore, G. L., & Collins, A. (1990). The cognitive structure of emotions. Cambridge, UK: Cambridge University Press.
Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning. Champaign, IL: University of Illinois Press.
Pavlovic, V. I., Sharma, R., & Huang, T. S. (1997). Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 677–695.
Petrushin, V. A. (2000). Emotion recognition in speech signal: Experimental study, development and application. In 6th International Conference on Spoken Language Processing, ICSLP2000, Beijing (pp. 222–225).
Picard, R. W. (1997). Affective computing. Cambridge, MA: MIT Press.
Picard, R. W. (2003). Affective computing: Challenges. International Journal of Human-Computer Studies, 59(1–2), 55–64.
Rapaport, D. (1961). Emotions and memory. New York: Science Editions.
Scherer, K. R. (1986). Vocal affect expression: A review and a model for future research. Psychological Bulletin, 99, 143–165.
Schlosberg, H. (1954). Three dimensions of emotion. Psychological Review, 61, 81–88.
Schröder, M. (2001). Emotional speech synthesis: A review. In Eurospeech 2001, Aalborg, Denmark (pp. 561–564).
Tao, J., & Tan, T. (2005). Affective computing: A review. In ACII 2005 (pp. 981–995).
Tato, R., Santos, R., Kompe, R., & Pardo, J. M. (2002). Emotional space improves emotion recognition. In ICSLP2002, Denver (pp. 2029–2032).
Thagard, P. (2005). MIND: Introduction to cognitive science. Cambridge, MA: MIT Press.
Verma, A., Subramaniam, L. V., Rajput, N., et al. (2004). Animating expressive faces across languages. IEEE Transactions on Multimedia, 6(6), 791–800.
Yamamoto, E., Nakamura, S., & Shikano, K. (1998). Lip movement synthesis from speech based on hidden Markov models. Speech Communication, 26, 105–115.
© 2009 Springer-Verlag London Limited
Tao, J., Tan, T. (2009). Introduction. In: Tao, J., Tan, T. (eds) Affective Information Processing. Springer, London. https://doi.org/10.1007/978-1-84800-306-4_1