LOViS: Learning Orientation and Visual Signals for Vision and Language Navigation

Yue Zhang, Parisa Kordjamshidi


Abstract
Understanding spatial and visual information is essential for a navigation agent that follows natural language instructions. Current Transformer-based VLN agents entangle orientation and vision information, which limits the gains from learning each information source separately. In this paper, we design a neural agent with explicit Orientation and Vision modules. These modules learn to ground spatial information and landmark mentions in the instructions to the visual environment more effectively. To strengthen the spatial reasoning and visual perception of the agent, we design specific pre-training tasks that feed and better utilize the corresponding modules in our final navigation model. We evaluate our approach on both the Room-to-Room (R2R) and Room-for-Room (R4R) datasets and achieve state-of-the-art results on both benchmarks.
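The abstract describes separate Orientation and Vision modules whose scores are combined for action selection. The following is a minimal illustrative sketch of that idea, not the authors' code: all module names, shapes, and the learned mixing weight are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class DualSignalScorer(nn.Module):
    """Hypothetical sketch: score candidate views with two disentangled
    modules, one for orientation (spatial) signals and one for vision
    (landmark) signals, then mix the two scores."""

    def __init__(self, hidden: int = 512):
        super().__init__()
        # Assumed sub-modules: each grounds one signal type from the instruction.
        self.orientation = nn.Linear(hidden, hidden)  # spatial phrases -> heading/elevation space
        self.vision = nn.Linear(hidden, hidden)       # landmark mentions -> visual feature space
        self.mix = nn.Parameter(torch.tensor(0.5))    # learned balance between the two scores

    def forward(self, instr, orient_feats, visual_feats):
        # instr: (B, H) instruction encoding
        # orient_feats, visual_feats: (B, K, H) features of K candidate views
        s_orient = torch.einsum('bh,bkh->bk', self.orientation(instr), orient_feats)
        s_vision = torch.einsum('bh,bkh->bk', self.vision(instr), visual_feats)
        return self.mix * s_orient + (1 - self.mix) * s_vision  # (B, K) action logits
```

Keeping the two scores separate until the final mix is what lets dedicated pre-training tasks target each module independently, as the paper proposes.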
Anthology ID:
2022.coling-1.505
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5745–5754
URL:
https://aclanthology.org/2022.coling-1.505
Cite (ACL):
Yue Zhang and Parisa Kordjamshidi. 2022. LOViS: Learning Orientation and Visual Signals for Vision and Language Navigation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5745–5754, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
LOViS: Learning Orientation and Visual Signals for Vision and Language Navigation (Zhang & Kordjamshidi, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.505.pdf
Code
 hlr/lovis