Grounding natural language instructions to semantic goal representations for abstraction and generalization
Language grounding is broadly defined as the problem of mapping natural language instructions to robot behavior. To be truly effective, a language grounding system must be accurate in its selection of behavior, efficient in the robot's realization of that behavior, and capable of generalizing beyond the commands and environment configurations seen at training time. One choice crucial to the success of a language grounding model is the representation used to capture the objective specified by the input command. Prior work varies in its use of explicit goal representations: some approaches lack a representation altogether, resulting in models that infer whole sequences of robot actions, while others map to carefully constructed logical-form representations. While many models in either category are reasonably accurate, they offer neither efficient execution nor generalization without a large amount of manual specification. In this work, we take a first step toward language grounding models that excel across accuracy, efficiency, and generalization through the construction of simple, semantic goal representations within Markov decision processes. We propose two related semantic goal representations that exploit the hierarchical structure of tasks and the compositional nature of language, respectively, and present multiple grounding models for each. We validate these ideas empirically with results from following text instructions in a simulated mobile-manipulator domain, as well as demonstrations of a physical robot responding to spoken instructions in real time. Our grounding models tie abstraction in language commands to a hierarchical planner for the robot's execution, enabling a response-time speed-up of several orders of magnitude over baseline planners in sufficiently large domains. Concurrently, our grounding models for generalization infer elements of the semantic representation that are subsequently combined to form a complete goal description, enabling the interpretation of commands involving novel combinations never seen during training. Taken together, our results show that the design of the semantic goal representation has powerful implications for the accuracy, efficiency, and generalization capabilities of language grounding models.
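The compositional grounding idea above can be sketched in miniature: each element of the semantic goal representation (here, a predicate and its argument) is inferred independently from the command and then combined into a complete goal, so novel predicate-argument combinations remain interpretable. This is an illustrative sketch only, not the paper's implementation; the names (`Goal`, `ground`, the lookup tables standing in for learned per-element models) are hypothetical.

```python
# Illustrative sketch (not the paper's implementation) of compositional
# grounding: infer each element of a semantic goal representation
# separately, then combine them into a complete goal description.
# All names (Goal, ground, the lookup tables) are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:
    """A semantic goal: a predicate applied to an argument, checkable
    against MDP states (represented here as simple dicts)."""
    predicate: str
    argument: str

    def is_satisfied(self, state: dict) -> bool:
        return state.get((self.predicate, self.argument), False)

# Toy per-element "models": keyword lookup stands in for the learned
# classifiers that would score each element of the representation.
PREDICATES = {"to": "agentInRoom", "into": "blockInRoom"}
ARGUMENTS = {"red": "redRoom", "blue": "blueRoom", "green": "greenRoom"}

def ground(command: str) -> Goal:
    tokens = command.lower().split()
    predicate = next(PREDICATES[t] for t in tokens if t in PREDICATES)
    argument = next(ARGUMENTS[t] for t in tokens if t in ARGUMENTS)
    # Because the elements are inferred independently, combinations of
    # predicate and argument never seen together during training can
    # still be assembled into a valid goal.
    return Goal(predicate, argument)

goal = ground("Go to the green room")
print(goal)  # Goal(predicate='agentInRoom', argument='greenRoom')
```

In this toy version, a command pairing "into" with "green" would ground correctly even if that pairing never occurred in training data, which is the generalization behavior the abstract describes.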
This work is supported by the National Science Foundation under Grant Number IIS-1637614, the US Army/DARPA under Grant Number W911NF-15-1-0503, and the National Aeronautics and Space Administration under Grant Number NNX16AR61G.
Lawson L.S. Wong was supported by a Croucher Foundation Fellowship.