Towards an Architecture Combining Grounding and Planning for Human-Robot Interaction
We consider the problem of connecting natural language to the physical world for robotic object manipulation. Robotic reasoning systems must solve this problem before a robot can act in the real world. In this paper, we propose an architecture that combines grounding and planning to address it. The grounding system grounds the meaning of a natural language sentence in the physical environment perceived by the robot's sensors and generates a knowledge base describing that environment. The planning system then uses this knowledge base to infer a plan for object manipulation, which can be generated effectively by an Answer Set Programming (ASP) planner. We evaluate the overall architecture on several datasets and on a task from RoboCup@Home 2014 (http://www.robocup2014.org/). The results show that the new architecture outperforms the systems it was compared against and yields acceptable performance in a real-world scenario.
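The paper's actual ASP encoding and grounding interface are not reproduced here. As a rough, hypothetical illustration of the pipeline the abstract describes (perception → symbolic knowledge base → plan search), the sketch below represents grounded facts as tuples and finds a pick-and-place plan with breadth-first search; all function names and the fact/action schemas are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of the grounding -> knowledge base -> planning pipeline.
# The fact schema ('on', object, support) and the single 'move' action are
# illustrative assumptions; the paper instead uses an ASP planner, which would
# declare actions and effects as logic rules and let the solver find a plan.
from collections import deque

def ground_scene(detections):
    """Turn perceived (object, support) detections into a symbolic
    knowledge base of facts, e.g. ('on', 'cup', 'table')."""
    return frozenset(('on', obj, support) for obj, support in detections)

def successors(state):
    """Enumerate (action, next_state) pairs for a toy pick-and-place
    domain: any object may be moved onto any other known support."""
    supports = {s for (_, _, s) in state} | {o for (_, o, _) in state}
    for (_, obj, src) in state:
        for dst in supports:
            if dst not in (obj, src):
                nxt = (state - {('on', obj, src)}) | {('on', obj, dst)}
                yield ('move', obj, src, dst), frozenset(nxt)

def plan(kb, goal):
    """Breadth-first search for a shortest action sequence that makes
    every goal fact hold; returns None if the goal is unreachable."""
    frontier = deque([(kb, [])])
    seen = {kb}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None
```

For example, `plan(ground_scene([('cup', 'table'), ('book', 'shelf')]), {('on', 'cup', 'shelf')})` yields the one-step plan `[('move', 'cup', 'table', 'shelf')]`. An ASP solver replaces this explicit search with declarative rules over the same kind of knowledge base.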
This research is supported by the National Natural Science Foundation of China under grant 61175057 and the USTC Key-Direction Research Fund under grant WK0110000028.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 License (http://creativecommons.org/licenses/by-nc/2.5/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.