Language-Based Sensing Descriptors for Robot Object Grounding

  • Guglielmo Gemignani
  • Manuela Veloso
  • Daniele Nardi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9513)


In this work, we consider an autonomous robot that must understand commands given by a human in natural language. Specifically, we assume that the robot is provided with an internal representation of the environment, but that this representation is unknown to the user. In this context, we address the problem of enabling the human to understand the robot's internal representation through dialog. To this end, we introduce the concept of sensing descriptors. The robot uses these representations to recognize unknown object properties in the given commands and to warn the user about them. Additionally, we show how these properties can be learned over time by leveraging past interactions, thereby enhancing the grounding capabilities of the robot.
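To make the idea concrete, the following is a minimal illustrative sketch (not the authors' implementation; all class, routine, and property names are hypothetical) of how sensing descriptors could be organized: a mapping from object properties mentioned in language to the perception routines that can ground them. Properties with no associated routine are flagged so the robot can warn the user, and new descriptors can be registered as they are learned from past interactions.

```python
class SensingDescriptors:
    """Maps language-level object properties to perception routines."""

    def __init__(self):
        # Hypothetical initial descriptors: property -> perception routine name.
        self.descriptors = {"red": "color_detector", "big": "size_estimator"}

    def ground(self, properties):
        """Split the requested properties into groundable and unknown ones."""
        known = {p: self.descriptors[p] for p in properties if p in self.descriptors}
        unknown = [p for p in properties if p not in self.descriptors]
        return known, unknown

    def learn(self, prop, routine):
        """Learning from past interactions: bind a new property to a routine."""
        self.descriptors[prop] = routine


sd = SensingDescriptors()
known, unknown = sd.ground(["red", "fragile"])
# "fragile" has no sensing descriptor yet, so the robot would warn the user.
print("unknown properties:", unknown)
sd.learn("fragile", "material_classifier")
```

In this sketch, the warning step of the dialog corresponds to reporting the `unknown` list back to the user, while `learn` stands in for the process of acquiring a new descriptor from the interaction.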


Keywords: Sensing descriptors · Human-robot interaction · Natural language processing



Copyright information

© Springer International Publishing Switzerland 2015

Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 International License, which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Guglielmo Gemignani (1, corresponding author)
  • Manuela Veloso (2)
  • Daniele Nardi (1)
  1. Department of Computer, Control, and Management Engineering “Antonio Ruberti”, Sapienza University of Rome, Rome, Italy
  2. Computer Science Department, Carnegie Mellon University, Pittsburgh, USA
