
MAWARI: A Social Interface to Reduce the Workload of the Conversation

  • Yuta Yoshiike
  • P. Ravindra S. De Silva
  • Michio Okada
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7072)

Abstract

In this paper, we propose a MAWARI-based social interface as an interactive social medium for broadcasting information (e.g., news). The interface consists of three sociable creatures (MAWARIs) built around minimalist design concepts. MAWARI is a small-scale robot that expresses its social cues solely through body gestures. Because the creatures interact through vocal exchanges mixed with attractive body gestures, the user's observational workload during the conversation is kept low. The proposed interface can operate in two states, behaving as a passive social interface and as an interactive social interface, in order to reduce the conversational workload of the participant.
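
The two behavioral states described above amount to a mode switch between self-contained broadcasting and user-directed interaction. The Python sketch below is purely illustrative and not the authors' implementation: the class names, the gesture set, and the speech-detection trigger are all assumptions introduced here for clarity.

    from enum import Enum, auto
    import random


    class Mode(Enum):
        """The two behavioral states described in the abstract."""
        PASSIVE = auto()      # creatures converse among themselves
        INTERACTIVE = auto()  # a creature addresses the user directly


    class Mawari:
        """One sociable creature: vocal output plus simple body gestures."""

        # Hypothetical gesture vocabulary; the paper does not enumerate one.
        GESTURES = ("lean_forward", "tilt", "bounce")

        def __init__(self, name: str):
            self.name = name

        def utter(self, text: str) -> None:
            # A real robot would synthesize speech and drive motors;
            # here we just log the utterance with an accompanying gesture.
            gesture = random.choice(self.GESTURES)
            print(f"[{self.name}] ({gesture}) {text}")


    class SocialInterface:
        """Coordinates three MAWARIs broadcasting news items."""

        def __init__(self):
            self.creatures = [Mawari(f"MAWARI-{i}") for i in range(1, 4)]
            self.mode = Mode.PASSIVE

        def on_user_speech(self) -> None:
            # Assumed trigger: detecting user speech switches the
            # interface into the interactive state.
            self.mode = Mode.INTERACTIVE

        def broadcast(self, news_item: str) -> None:
            if self.mode is Mode.PASSIVE:
                # Passive state: the creatures talk among themselves,
                # so the user can simply overhear the information.
                speaker = random.choice(self.creatures)
                speaker.utter(news_item)
            else:
                # Interactive state: one creature draws the user in.
                self.creatures[0].utter(f"What do you think? {news_item}")


    if __name__ == "__main__":
        interface = SocialInterface()
        interface.broadcast("Today's headline ...")
        interface.on_user_speech()
        interface.broadcast("Here is a follow-up story.")

In the passive state the user merely overhears the creatures' conversation, which is how the interface keeps the participant's conversational workload low; only in the interactive state is the user addressed directly.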

Keywords

Minimalist design · Passive social interface · Multiparty discourse · Conversational workload



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Yuta Yoshiike¹
  • P. Ravindra S. De Silva¹
  • Michio Okada¹

  1. Interactions and Communication Design Lab, Toyohashi University of Technology, Toyohashi, Japan
