Abstract
Vision-language navigation (VLN), in which an agent follows a natural language instruction through a visual environment, has been studied under the premise that the input command is fully feasible in the environment. Yet in practice, a request may be infeasible due to language ambiguity or environment changes. To study VLN with unknown command feasibility, we introduce a new dataset, Mobile app Tasks with Iterative Feedback (MoTIF), where the goal is to complete a natural language command in a mobile app. Mobile apps provide a scalable domain in which to study real downstream uses of VLN methods. Moreover, mobile app commands serve as instructions for interactive navigation, as they result in action sequences with state changes via clicking, typing, or swiping. MoTIF is the first VLN dataset to include feasibility annotations, containing both binary feasibility labels and fine-grained labels for why tasks are unsatisfiable. We further collect follow-up questions for ambiguous queries to enable research on task uncertainty resolution. Equipped with our dataset, we propose the new problem of feasibility prediction, in which a natural language instruction and a multimodal app environment are used to predict command feasibility. MoTIF is more realistic than prior app datasets, containing more diverse environments, higher-level goals, and longer action sequences. We evaluate interactive VLN methods on MoTIF, quantify the generalization ability of current approaches to new app environments, and measure the effect of task feasibility on navigation performance.
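To make the annotation schema and the proposed feasibility prediction task concrete, the sketch below shows one way a MoTIF episode and a naive feasibility baseline could be represented in code. All field names, feature dimensions, and the fusion architecture are illustrative assumptions, not the dataset's actual format or the paper's model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

import torch
import torch.nn as nn


@dataclass
class MotifEpisode:
    """Hypothetical record layout for one MoTIF task (field names assumed)."""
    app_id: str                          # app identifier, e.g. an Android package name
    instruction: str                     # natural language command to complete
    feasible: bool                       # binary feasibility label
    infeasibility_reason: Optional[str]  # fine-grained label when infeasible
    follow_up_question: Optional[str]    # collected for ambiguous queries
    actions: List[dict] = field(default_factory=list)  # click/type/swipe steps


class FeasibilityClassifier(nn.Module):
    """Minimal feasibility-prediction baseline: fuse an instruction
    embedding with pooled app-screen features and score feasibility.
    Dimensions and the two-layer fusion head are placeholders."""

    def __init__(self, text_dim: int = 768, screen_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + screen_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, text_feat: torch.Tensor, screen_feat: torch.Tensor) -> torch.Tensor:
        # text_feat: (B, text_dim) instruction embedding
        # screen_feat: (B, screen_dim) pooled screenshot/view-hierarchy features
        logits = self.fuse(torch.cat([text_feat, screen_feat], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)  # estimated P(command is feasible)
```

A baseline of this shape would be trained with binary cross-entropy against the feasibility label; per the abstract, the actual task conditions on the full multimodal app environment rather than the single pooled screen feature assumed here.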
Acknowledgements
This work is funded in part by Boston University, the Google Ph.D. Fellowship program, the MIT-IBM Watson AI Lab, the Google Faculty Research Award, and NSF Grant IIS-1750563.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Burns, A., Arsan, D., Agrawal, S., Kumar, R., Saenko, K., Plummer, B.A. (2022). A Dataset for Interactive Vision-Language Navigation with Unknown Command Feasibility. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13668. Springer, Cham. https://doi.org/10.1007/978-3-031-20074-8_18
DOI: https://doi.org/10.1007/978-3-031-20074-8_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20073-1
Online ISBN: 978-3-031-20074-8