Marve: A Prototype Virtual Human Interface Framework for Studying Human-Virtual Human Interaction

  • Sabarish Babu
  • Stephen Schmugge
  • Raj Inugala
  • Srinivasa Rao
  • Tiffany Barnes
  • Larry F. Hodges
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3661)

Abstract

Human-to-virtual-human interaction is the next frontier in interface design, particularly for tasks that are social or collaborative in nature. Several embodied interface agents have been developed for specific social, place-related tasks, but empirical evaluations of these systems have been rare. In this work, we present Marve (Messaging And Recognition Virtual Entity), our general-purpose Virtual Human Interface Framework, which integrates cutting-edge interface technologies into a seamless real-time system for studying human-to-virtual-human interaction. Marve is a prototype of a real-time, embodied, interactive, autonomous virtual human interface agent framework. Marve “lives” next to the primary entrance of the Future Computing Lab. His primary tasks are to greet everyone who enters or leaves the lab, and to take and deliver messages for the students and faculty who work there. Marve uses computer vision techniques for passer-by detection, gaze tracking, and face recognition, and communicates via natural language. We present a preliminary empirical study of the basic elements of Marve, including interaction response times, recognition of friends, and the ability to learn to recognize new people.
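The abstract describes Marve's perception loop (detect a passer-by, recognize a known friend or offer to learn a new face, then respond in natural language) without detailing its implementation. The sketch below illustrates that loop using off-the-shelf OpenCV components; the Haar-cascade detector, LBPH recognizer, file names, label map, and match threshold are all illustrative assumptions for this sketch, not the components the Marve system actually used.

# Minimal sketch of a greeter loop in the spirit of Marve's perception
# pipeline: detect a passer-by with a face detector, then attempt
# recognition so the agent can greet known "friends" by name or offer
# to learn a new face. Detector and recognizer choices are stand-ins.
import cv2

FACE_SIZE = (100, 100)  # normalized crop size fed to the recognizer

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()  # needs opencv-contrib
recognizer.read("friends.yml")        # model trained offline on lab members
names = {0: "alice", 1: "bob"}        # label -> name map (illustrative)

cap = cv2.VideoCapture(0)             # camera watching the lab entrance
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Passer-by detection: any frontal face in view counts as a visitor.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE)
        label, distance = recognizer.predict(face)
        if distance < 60:             # lower LBPH distance = better match
            print(f"Hello, {names[label]}! Any messages for you today.")
        else:
            print("Hi there! I don't think we've met. What's your name?")

A full system in this vein would also incorporate the gaze tracking mentioned in the abstract and hand recognized identities off to dialogue and message-delivery components.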



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Sabarish Babu (1)
  • Stephen Schmugge (1)
  • Raj Inugala (1)
  • Srinivasa Rao (1)
  • Tiffany Barnes (1)
  • Larry F. Hodges (1)
  1. Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, USA
