Attainable Digital Embodied Storytelling Using State of the Art Tools, and a Little Touch

  • Conference paper
Social Robotics (ICSR 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14453)

Abstract

How closely can a robot capable of generating non-verbal behavior approximate a human narrator? Which missing features limit the naturalness and expressiveness of the robot as a storyteller? In this paper we explore this topic by identifying the key aspects needed to effectively convey the content of a story, and by analysing appropriate methodologies and tools that allow us to automatically enrich the expressiveness of the generated behavior. Finally, we explore some modifications intended to narrow the gap between robot and human-like behavior. Demonstration videos show that, although the communicative capabilities of the robot are adequate, there is still room for improvement.

Notes

  1. https://www.pillarlearning.com/products/codi.

  2. https://www.kickstarter.com/projects/trobo/trobo-the-storytelling-robot/.

  3. https://ww.emys.co.

  4. https://docs.nvidia.com/deeplearning/riva/user-guide/docs/tts/tts-overview.html.

  5. https://cloud.google.com/text-to-speech.

  6. BioVision Hierarchy (BVH) is a file format used for storing motion capture data.

  7. www.jaracuentacuentos.com.

  8. The different voices have been manually annotated in the SSML file.

  9. The Enchanted Forest Tacotron+Emotions: https://youtu.be/J4iHMcz_ODg.

  10. The Sad Tree Tacotron+Emotions: https://youtu.be/CDirRQ8ccoo.

  11. https://openai.com/research/whisper.

  12. The Enchanted Forest Hybrid+Emotions: https://youtu.be/ZVIPQV1ZcpQ.

  13. The Sad Tree Hybrid+Emotions: https://youtu.be/VyUY-jp2CFM.

  14. The Enchanted Forest Touch+Emotions: https://youtu.be/zSvogw0OJT0.

  15. The Sad Tree Touch+Emotions: https://youtu.be/8hPEa_Ls0vY.
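
Note 6 describes the BVH format. As a rough, purely illustrative sketch (the joint names, offsets, and values below are invented, not taken from the paper's data), a BVH file pairs a HIERARCHY section defining the skeleton with a MOTION section listing per-frame channel values:

```
HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 10.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 8.0 0.0
        }
    }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 90.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 90.0 0.0 0.0 5.0 0.0 0.0 10.0 0.0
```

Each motion line supplies one value per declared channel, in hierarchy order, so a playback rate of one line every Frame Time seconds reproduces the captured motion.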

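Note 8 refers to the manual annotation of voices in the SSML file. As a hedged sketch of what such markup can look like (the voice names and story text are illustrative assumptions, not the authors' actual annotation), standard SSML tags can assign distinct voices to the narrator and to a character:

```xml
<speak>
  <!-- Narrator voice for descriptive passages (voice name is illustrative) -->
  <voice name="en-US-Neural2-C">
    Once upon a time, deep in the enchanted forest,
    <break time="300ms"/>
    an old tree began to speak.
  </voice>
  <!-- A different voice, with altered prosody, for the character's dialogue -->
  <voice name="en-US-Neural2-D">
    <prosody rate="slow" pitch="-2st">Who dares to enter my forest?</prosody>
  </voice>
</speak>
```

The `voice`, `break`, and `prosody` elements are part of the standard SSML vocabulary accepted by cloud TTS services such as the one in note 5.
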
Acknowledgment

This work has been partially supported by the Basque Government under grant number IT1427-22, and by the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), and the European Regional Development Fund (FEDER) under grant number PID2021-122402OBC21 (MCIU/AEI/FEDER, UE). Author Zabala has received a PREDOKBERRI research grant from the Basque Government.

Author information

Corresponding author

Correspondence to Elena Lazkano.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zabala, U., Diez, A., Rodriguez, I., Augello, A., Lazkano, E. (2024). Attainable Digital Embodied Storytelling Using State of the Art Tools, and a Little Touch. In: Ali, A.A., et al. Social Robotics. ICSR 2023. Lecture Notes in Computer Science, vol. 14453. Springer, Singapore. https://doi.org/10.1007/978-981-99-8715-3_7

  • DOI: https://doi.org/10.1007/978-981-99-8715-3_7

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8714-6

  • Online ISBN: 978-981-99-8715-3

  • eBook Packages: Computer Science, Computer Science (R0)
