
Editorial: “Signals to Signs” – Feature Extraction, Recognition, and Multimodal Fusion

Chapter in Emotion-Oriented Systems, part of the book series Cognitive Technologies (COGTECH)

Abstract

Processing of recorded or real-time signals, feature extraction, and recognition are concepts of utmost importance to an affect-aware, capable system, since they enable machines to model human behavior based on theory and to interpret it based on observation. This chapter discusses feature extraction and recognition from unimodal features in the case of speech, facial expressions and gestures, and physiological signals, and elaborates on attention, fusion, dynamics, and adaptation in different multimodal settings.
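The abstract refers to fusing information from several unimodal recognizers. As a purely illustrative aid, and not the chapter's own method, the sketch below shows one common approach, decision-level ("late") fusion: each modality's classifier emits a probability distribution over the same emotion labels, and the distributions are combined as a weighted average. The label set, weights, and probability values are all assumptions made up for the example.

```python
# Minimal sketch (illustrative only): decision-level ("late") fusion of
# unimodal emotion recognizers. Each modality yields a probability
# distribution over a shared label set; the fused decision is a weighted
# average of those distributions. Labels, weights, and probabilities below
# are hypothetical.
import numpy as np

LABELS = ["anger", "joy", "neutral", "sadness"]

def fuse_decisions(unimodal_probs, weights=None):
    """Weighted average of per-modality class probabilities.

    unimodal_probs: dict mapping modality name -> sequence of len(LABELS) probabilities
    weights: dict mapping modality name -> float (defaults to uniform)
    """
    modalities = list(unimodal_probs)
    if weights is None:
        weights = {m: 1.0 / len(modalities) for m in modalities}
    fused = np.zeros(len(LABELS))
    for m in modalities:
        fused += weights[m] * np.asarray(unimodal_probs[m], dtype=float)
    fused /= fused.sum()  # renormalize in case the weights do not sum to 1
    return fused

# Hypothetical outputs from speech, face, and physiology classifiers.
probs = {
    "speech": [0.10, 0.60, 0.20, 0.10],
    "face": [0.05, 0.70, 0.20, 0.05],
    "physiology": [0.25, 0.30, 0.30, 0.15],
}
fused = fuse_decisions(probs, weights={"speech": 0.4, "face": 0.4, "physiology": 0.2})
print(LABELS[int(np.argmax(fused))], fused.round(3))
```

Feature-level ("early") fusion, attention over modalities, and adaptation over time, which the chapter also covers, would replace the simple weighted average with learned combination functions.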



Author information

Correspondence to Kostas Karpouzis.


Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Karpouzis, K. (2011). Editorial: “Signals to Signs” – Feature Extraction, Recognition, and Multimodal Fusion. In: Cowie, R., Pelachaud, C., Petta, P. (eds) Emotion-Oriented Systems. Cognitive Technologies. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15184-2_5


  • DOI: https://doi.org/10.1007/978-3-642-15184-2_5


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15183-5

  • Online ISBN: 978-3-642-15184-2

  • eBook Packages: Computer Science, Computer Science (R0)
