
The Willful Marionette: Modeling Social Cognition Using Gesture-Gesture Interaction Dialogue

  • Mohammad Mahzoon
  • Mary Lou Maher
  • Kazjon Grace
  • Lilla LoCurto
  • Bill Outcault
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9744)

Abstract

In this paper we describe a cognitive model for provoking gestural dialogue with humans, embodied in an interactive marionette. The cognitive model is a framework for the design and implementation of gesture-to-gesture interaction. The marionette perceives human gestures using a Microsoft Kinect, reasons about the perceived gestures to select a response, and then performs the selected response gesture. This simple cognitive model, perceive-reason-perform, operates in a social context in which humans interact with the marionette. The marionette was built as a 3D replica of a human body. The marionette’s responses were designed using interaction design techniques such as bodystorming, gesture elicitation, and the “Wizard of Oz” method to provoke an emotional response from humans. Several user studies were conducted during and after the design process to assess progress toward the design goal of an engaging and provocative interaction. These studies showed that the interaction encouraged participants to engage in a gesture-based dialogue with the marionette, and that they perceived the system to possess a kind of intelligence.
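
The perceive-reason-perform loop described above maps naturally onto a small control loop. The sketch below is a minimal, hypothetical illustration of that architecture, not the authors' implementation: the sensor, classifier, marionette actuator, and response_map objects are assumed stand-ins for the components the abstract describes.

```python
# Minimal sketch of a perceive-reason-perform loop (hypothetical names;
# not the authors' code). The sensor, classifier, and marionette objects
# stand in for the Kinect input, gesture recognition, and string-actuation
# components described in the abstract.
import random
import time

class PerceiveReasonPerformLoop:
    def __init__(self, sensor, classifier, marionette, response_map):
        self.sensor = sensor              # skeleton input, e.g. a Kinect wrapper
        self.classifier = classifier      # maps a skeleton frame to a gesture label
        self.marionette = marionette      # drives the marionette's motors/strings
        self.response_map = response_map  # gesture label -> candidate response gestures

    def step(self):
        frame = self.sensor.read_skeleton()        # perceive one skeleton frame
        if frame is None:
            return
        gesture = self.classifier.classify(frame)  # reason: recognize the gesture...
        candidates = self.response_map.get(gesture, [])
        if not candidates:
            return
        response = random.choice(candidates)       # ...and choose among responses
        self.marionette.perform(response)          # perform the response gesture

    def run(self, hz=30):
        # Poll at roughly the depth sensor's frame rate.
        while True:
            self.step()
            time.sleep(1.0 / hz)
```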

Keywords

Marionette · Gesture · Human-computer interaction

Acknowledgments

We would like to thank the following for their financial support of this project: Yi Deng, Dean of the College of Computing and Informatics at the University of North Carolina at Charlotte (UNCC); William Ribarsky, former Chair of the Department of Computer Science at UNCC; and Ken Lambla, Dean of the College of Art and Architecture at UNCC. We thank the many people who participated in the design and implementation of Little Bill: Alexander Adams, Trevor Hess, Yueqi Hu, Lina Lee, Steph Grace, and Katy Gero. We truly appreciate their help throughout the project.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Mohammad Mahzoon (1)
  • Mary Lou Maher (1)
  • Kazjon Grace (1)
  • Lilla LoCurto (1)
  • Bill Outcault (1)

  1. University of North Carolina at Charlotte, Charlotte, USA
