Object-and-Action Aware Model for Visual Language Navigation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12355)

Abstract

Vision-and-Language Navigation (VLN) is unique in that it requires turning relatively general natural-language instructions into robot agent actions on the basis of the visible environment. This requires extracting value from two very different types of natural-language information. The first is object descriptions (e.g., ‘table’, ‘door’), each of which serves as a cue for the agent to determine the next action by locating the item in the visible environment; the second is action specifications (e.g., ‘go straight’, ‘turn left’), which allow the robot to predict the next movement directly, without relying on visual perception. However, most existing methods pay little attention to distinguishing these two kinds of information during instruction encoding, and they mix together the matching between textual object/action encodings and the visual-perception/orientation features of candidate viewpoints. In this paper, we propose an Object-and-Action Aware Model (OAAM) that processes these two forms of instruction separately. This enables each branch to flexibly match object-centered or action-centered instruction to its own counterpart visual perception or action orientation. One side effect of this solution, however, is that an object mentioned in the instruction may be visible in the direction of two or more candidate viewpoints, so the OAAM may not predict the viewpoint on the shortest path as the next action. To handle this problem, we design a simple but effective path loss that penalizes trajectories deviating from the ground-truth path. Experimental results demonstrate the effectiveness of the proposed model and path loss, and the superiority of their combination, which achieves a \(50\%\) SPL score on the R2R dataset and a \(40\%\) CLS score on the R4R dataset in unseen environments, outperforming the previous state-of-the-art.
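
To make the two-branch idea and the path loss concrete, the following is a minimal, illustrative PyTorch-style sketch, not the authors' implementation: all module names, feature dimensions, and tensors such as `obj_ctx`, `cand_vis`, and `cand_dist_to_goal` are hypothetical. It shows an object branch that scores candidates by matching object-centered instruction context against appearance features, an action branch that matches action-centered context against orientation features, a learned combination of the two scores, and a simplified path-loss term that penalizes probability mass on candidates far from the next ground-truth viewpoint.

```python
# Illustrative sketch only (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OAAMHead(nn.Module):
    def __init__(self, txt_dim=512, vis_dim=2048, ori_dim=128, hid=512):
        super().__init__()
        # Separate projections for object-centered and action-centered cues.
        self.obj_txt = nn.Linear(txt_dim, hid)
        self.act_txt = nn.Linear(txt_dim, hid)
        self.obj_vis = nn.Linear(vis_dim, hid)   # candidate appearance features
        self.act_ori = nn.Linear(ori_dim, hid)   # candidate orientation features
        # Learned weights to combine the two branch scores at each step.
        self.combine = nn.Linear(txt_dim, 2)

    def forward(self, obj_ctx, act_ctx, cand_vis, cand_ori):
        # obj_ctx, act_ctx: (B, txt_dim) attended instruction contexts
        # cand_vis: (B, K, vis_dim) appearance of K candidate viewpoints
        # cand_ori: (B, K, ori_dim) orientation encoding of the candidates
        obj_score = torch.einsum('bd,bkd->bk',
                                 self.obj_txt(obj_ctx), self.obj_vis(cand_vis))
        act_score = torch.einsum('bd,bkd->bk',
                                 self.act_txt(act_ctx), self.act_ori(cand_ori))
        w = torch.softmax(self.combine(obj_ctx + act_ctx), dim=-1)   # (B, 2)
        return w[:, :1] * obj_score + w[:, 1:] * act_score           # (B, K)

def path_loss(logits, cand_dist_to_goal):
    # Simplified path-loss term: expected distance of the chosen candidate
    # from the next ground-truth viewpoint, which penalizes trajectories
    # that drift away from the reference path.
    probs = F.softmax(logits, dim=-1)              # (B, K)
    return (probs * cand_dist_to_goal).sum(dim=-1).mean()
```

In this sketch the path loss is simply added to the usual action-prediction loss with a weighting coefficient; the separation of the two branches is what lets object words attend to appearance and action words attend to orientation independently.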

Keywords

Vision-and-Language Navigation · Modular network · Reward shaping

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. The University of Adelaide, Adelaide, Australia
  2. Harbin Institute of Technology, Weihai, China
  3. Aipixel Inc., Shenzhen, China