Marve: A Prototype Virtual Human Interface Framework for Studying Human-Virtual Human Interaction
Human-to-virtual-human interaction is the next frontier in interface design, particularly for tasks that are social or collaborative in nature. Several embodied interface agents have been developed for specific social, place-related tasks, but empirical evaluations of these systems have been rare. In this work, we present Marve (Messaging And Recognition Virtual Entity), a general-purpose virtual human interface framework that integrates state-of-the-art interface technologies into a seamless real-time system for studying human-to-virtual-human interaction. Marve is a prototype of a real-time, embodied, interactive, autonomous virtual human interface agent. Marve “lives” next to the primary entrance of the Future Computing Lab. His primary tasks are to greet everyone who enters or leaves the lab and to take and deliver messages to the students and faculty who work there. Marve uses computer vision techniques for passer-by detection, gaze tracking, and face recognition, and communicates via natural language. We present a preliminary empirical study of the basic elements of Marve, including interaction response times, recognition of friends, and the ability to learn to recognize new people.