Integrating Language, Vision and Action for Human Robot Dialog Systems

  • Markus Rickert
  • Mary Ellen Foster
  • Manuel Giuliani
  • Tomas By
  • Giorgio Panin
  • Alois Knoll
Conference paper

DOI: 10.1007/978-3-540-73281-5_108

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4555)
Cite this paper as:
Rickert M., Foster M.E., Giuliani M., By T., Panin G., Knoll A. (2007) Integrating Language, Vision and Action for Human Robot Dialog Systems. In: Stephanidis C. (eds) Universal Access in Human-Computer Interaction. Ambient Interaction. UAHCI 2007. Lecture Notes in Computer Science, vol 4555. Springer, Berlin, Heidelberg

Abstract

Developing a robot system that can interact directly with a human instructor in a natural way requires not only highly skilled sensorimotor coordination and action planning on the part of the robot, but also the ability to understand and communicate with a human being in many modalities. A typical application of such a system is interactive assembly for construction tasks. A human communicator sharing a common view of the work area with the robot system instructs it by speaking to it in the same way that they would communicate with a human partner.


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Markus Rickert
  • Mary Ellen Foster
  • Manuel Giuliani
  • Tomas By
  • Giorgio Panin
  • Alois Knoll

  1. Robotics and Embedded Systems Group, Department of Informatics, Technische Universität München, Boltzmannstraße 3, D-85748 Garching bei München, Germany
