Sub-Instruction Aware Vision-and-Language Navigation

Yicong Hong, Cristian Rodriguez, Qi Wu, Stephen Gould


Abstract
Vision-and-language navigation requires an agent to navigate through a real 3D environment following natural language instructions. Despite significant advances, few previous works are able to fully utilize the strong correspondence between the visual and textual sequences. Meanwhile, due to the lack of intermediate supervision, the agent’s performance at following each part of the instruction cannot be assessed during navigation. In this work, we focus on the granularity of the visual and language sequences as well as the traceability of agents through the completion of an instruction. We provide agents with fine-grained annotations during training and find that they are able to follow the instruction better and have a higher chance of reaching the target at test time. We enrich the benchmark dataset Room-to-Room (R2R) with sub-instructions and their corresponding paths. To make use of this data, we propose effective sub-instruction attention and shifting modules that select and attend to a single sub-instruction at each time-step. We implement our sub-instruction modules in four state-of-the-art agents, compare with their baseline models, and show that our proposed method improves the performance of all four agents. We release the Fine-Grained R2R dataset (FGR2R) and the code at https://github.com/YicongHong/Fine-Grained-R2R.
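The abstract describes sub-instruction attention and shifting modules that keep the agent focused on a single sub-instruction at each time-step. Below is a minimal PyTorch sketch of what such modules might look like; the class names, tensor shapes, and the sigmoid shift criterion are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Illustrative sketch of sub-instruction attention plus a shifting module.
# All names here are hypothetical; the real implementation is at
# https://github.com/YicongHong/Fine-Grained-R2R.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubInstructionAttention(nn.Module):
    """Attend over the tokens of the currently selected sub-instruction only."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, state, sub_tokens, sub_mask):
        # state:      (batch, hidden)           agent state at this time-step
        # sub_tokens: (batch, max_len, hidden)  token features of the current sub-instruction
        # sub_mask:   (batch, max_len)          True at padding positions
        query = self.query_proj(state).unsqueeze(2)        # (batch, hidden, 1)
        scores = torch.bmm(sub_tokens, query).squeeze(2)   # (batch, max_len)
        scores = scores.masked_fill(sub_mask, float("-inf"))
        attn = F.softmax(scores, dim=1)                    # attention over tokens
        context = torch.bmm(attn.unsqueeze(1), sub_tokens).squeeze(1)
        return context, attn                               # (batch, hidden), (batch, max_len)


class ShiftingModule(nn.Module):
    """Decide whether to advance to the next sub-instruction."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, state, context):
        # Shift probability computed from the agent state and the attended
        # textual context of the current sub-instruction.
        logit = self.scorer(torch.cat([state, context], dim=1))
        return torch.sigmoid(logit).squeeze(1)             # (batch,)


if __name__ == "__main__":
    batch, max_len, hidden = 2, 10, 512
    state = torch.randn(batch, hidden)
    sub_tokens = torch.randn(batch, max_len, hidden)
    sub_mask = torch.zeros(batch, max_len, dtype=torch.bool)

    attend = SubInstructionAttention(hidden)
    shift = ShiftingModule(hidden)
    context, attn = attend(state, sub_tokens, sub_mask)
    p_shift = shift(state, context)  # advance the sub-instruction index where p_shift > 0.5
```

In a full agent, each navigation step would attend only over the tokens of the current sub-instruction and increment the sub-instruction index wherever the shift probability crosses a threshold; with the FGR2R sub-path annotations, the shift decision could also be supervised directly during training.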
Anthology ID:
2020.emnlp-main.271
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3360–3376
URL:
https://aclanthology.org/2020.emnlp-main.271
DOI:
10.18653/v1/2020.emnlp-main.271
Cite (ACL):
Yicong Hong, Cristian Rodriguez, Qi Wu, and Stephen Gould. 2020. Sub-Instruction Aware Vision-and-Language Navigation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3360–3376, Online. Association for Computational Linguistics.
Cite (Informal):
Sub-Instruction Aware Vision-and-Language Navigation (Hong et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.271.pdf
Video:
https://slideslive.com/38938820
Code:
YicongHong/Fine-Grained-R2R
Data:
Fine-Grained R2R