
EMOGIB: Emotional Gibberish Speech Database for Affective Human-Robot Interaction

  • Conference paper
Affective Computing and Intelligent Interaction (ACII 2011)

Abstract

Gibberish speech consists of vocalizations of meaningless strings of speech sounds. It is sometimes used by performing artists or in cartoon animation (e.g., the Teletubbies) to express intended emotions without pronouncing any intelligible word. Because no intelligible text needs to be pronounced and only affect is conveyed, gibberish is attractive for affective computing. In our work, we intend to experiment with communication between a robot and hospitalized children using affective gibberish. For this purpose, a new emotional database consisting of four distinct corpora has been recorded for affective child-robot interaction. The database comprises speech recordings of one actress simulating a neutral state and the big six emotions: anger, disgust, fear, happiness, sadness and surprise. All subsets of the database were evaluated in a perceptual test with adults, and one subset was additionally evaluated with children, achieving recognition scores of up to 81%.
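The recognition scores reported above come from a forced-choice perceptual test: listeners hear each gibberish utterance and pick the emotion they perceive, and accuracy is tallied per intended emotion. The sketch below is a hypothetical illustration of such a tally (not the authors' evaluation code; the emotion labels and demo responses are assumptions for the example):

```python
# Hypothetical sketch: per-emotion recognition scores and a confusion matrix
# from forced-choice perceptual-test responses. Data below are illustrative only.
from collections import Counter, defaultdict

EMOTIONS = ["neutral", "anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def recognition_scores(responses):
    """responses: iterable of (intended_emotion, perceived_emotion) pairs."""
    totals = Counter()          # how many judgments per intended emotion
    correct = Counter()         # how many were identified correctly
    confusions = defaultdict(Counter)  # intended -> perceived -> count
    for intended, perceived in responses:
        totals[intended] += 1
        confusions[intended][perceived] += 1
        if intended == perceived:
            correct[intended] += 1
    scores = {e: correct[e] / totals[e] for e in totals}
    return scores, confusions

# Made-up listener judgments, just to show the output format:
demo = [("anger", "anger"), ("anger", "disgust"),
        ("happiness", "happiness"), ("sadness", "sadness"),
        ("fear", "surprise"), ("fear", "fear")]
scores, conf = recognition_scores(demo)
print(scores)  # {'anger': 0.5, 'happiness': 1.0, 'sadness': 1.0, 'fear': 0.5}
```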





Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yilmazyildiz, S., Henderickx, D., Vanderborght, B., Verhelst, W., Soetens, E., Lefeber, D. (2011). EMOGIB: Emotional Gibberish Speech Database for Affective Human-Robot Interaction. In: D’Mello, S., Graesser, A., Schuller, B., Martin, JC. (eds) Affective Computing and Intelligent Interaction. ACII 2011. Lecture Notes in Computer Science, vol 6975. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24571-8_17


  • DOI: https://doi.org/10.1007/978-3-642-24571-8_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24570-1

  • Online ISBN: 978-3-642-24571-8
