ULN: Towards Underspecified Vision-and-Language Navigation

Weixi Feng, Tsu-Jui Fu, Yujie Lu, William Yang Wang


Abstract
Vision-and-Language Navigation (VLN) is a task in which an embodied agent follows language instructions to reach a target position. Despite significant performance improvements, the wide use of fine-grained instructions fails to capture the more practical linguistic variation encountered in reality. To fill this gap, we introduce a new setting, Underspecified Vision-and-Language Navigation (ULN), together with associated evaluation datasets. ULN evaluates agents using multi-level underspecified instructions rather than purely fine-grained or coarse-grained ones, which is a more realistic and general setting. As a first step toward ULN, we propose a VLN framework consisting of a classification module, a navigation agent, and an Exploitation-to-Exploration (E2E) module. Specifically, we propose learning Granularity Specific Sub-networks (GSS) that let the agent ground multi-level instructions with minimal additional parameters. Our E2E module then estimates grounding uncertainty and conducts multi-step lookahead exploration to further improve the success rate. Experimental results show that existing VLN models remain brittle to multi-level language underspecification. Our framework is more robust and outperforms the baselines on ULN by ~10% relative success rate across all levels.
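The abstract describes the E2E module only at a high level. Purely as a rough illustration of the idea (gating between greedy exploitation and multi-step lookahead exploration by an uncertainty estimate), the following is a minimal sketch; the agent interface (action_logits, simulate), the entropy-based uncertainty proxy, and the threshold/depth hyperparameters are all assumptions, not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    UNCERTAINTY_THRESHOLD = 0.8  # assumed hyperparameter
    LOOKAHEAD_STEPS = 2          # assumed lookahead depth

    def lookahead_score(agent, state, instruction, action, depth):
        # Greedy rollout: accumulate the max log-probability along one
        # branch. `agent.simulate` is a hypothetical transition helper.
        score = 0.0
        for _ in range(depth):
            state = agent.simulate(state, action)            # assumed API
            logits = agent.action_logits(state, instruction)  # assumed API
            log_probs = F.log_softmax(logits, dim=-1)
            action = log_probs.argmax().item()
            score += log_probs.max().item()
        return score

    def step_with_e2e(agent, state, instruction):
        logits = agent.action_logits(state, instruction)      # assumed API
        probs = F.softmax(logits, dim=-1)
        # Entropy of the action distribution as a simple uncertainty proxy.
        uncertainty = -(probs * probs.clamp_min(1e-9).log()).sum()

        if uncertainty < UNCERTAINTY_THRESHOLD:
            # Exploitation: grounding is confident, act greedily.
            return probs.argmax().item()

        # Exploration: score each candidate by rolling forward a few
        # steps and commit to the best-scoring branch.
        best_action, best_score = None, float("-inf")
        for action in range(probs.numel()):
            score = lookahead_score(agent, state, instruction,
                                    action, LOOKAHEAD_STEPS)
            if score > best_score:
                best_action, best_score = action, score
        return best_action

The key design point this sketch illustrates is that lookahead is triggered only when the agent is uncertain, so the extra exploration cost is paid only on ambiguous (underspecified) instructions.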
Anthology ID:
2022.emnlp-main.429
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6394–6412
URL:
https://aclanthology.org/2022.emnlp-main.429
DOI:
10.18653/v1/2022.emnlp-main.429
Cite (ACL):
Weixi Feng, Tsu-Jui Fu, Yujie Lu, and William Yang Wang. 2022. ULN: Towards Underspecified Vision-and-Language Navigation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6394–6412, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
ULN: Towards Underspecified Vision-and-Language Navigation (Feng et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.429.pdf