%0 Conference Proceedings
%T Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation
%A Zhu, Wanrong
%A Wang, Xin
%A Fu, Tsu-Jui
%A Yan, An
%A Narayana, Pradyumna
%A Sone, Kazoo
%A Basu, Sugato
%A Wang, William Yang
%Y Merlo, Paola
%Y Tiedemann, Jörg
%Y Tsarfaty, Reut
%S Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
%D 2021
%8 April
%I Association for Computational Linguistics
%C Online
%F zhu-etal-2021-multimodal
%X One of the most challenging topics in Natural Language Processing (NLP) is visually-grounded language understanding and reasoning. Outdoor vision-and-language navigation (VLN) is one such task, where an agent follows natural language instructions and navigates in real-life urban environments. Given the lack of human-annotated instructions that illustrate the intricate urban scenes, outdoor VLN remains a challenging task to solve. In this paper, we introduce a Multimodal Text Style Transfer (MTST) learning approach and leverage external multimodal resources to mitigate data scarcity in outdoor navigation tasks. We first enrich the navigation data by transferring the style of the instructions generated by the Google Maps API, then pre-train the navigator with the augmented external outdoor navigation dataset. Experimental results show that our MTST learning approach is model-agnostic and significantly outperforms the baseline models on the outdoor VLN task, improving the task completion rate on the test set by 8.7% relative.
%R 10.18653/v1/2021.eacl-main.103
%U https://aclanthology.org/2021.eacl-main.103
%U https://doi.org/10.18653/v1/2021.eacl-main.103
%P 1207-1221