R2H: Building Multimodal Navigation Helpers that Respond to Help Requests

Yue Fan, Jing Gu, Kaizhi Zheng, Xin Wang


Abstract
Intelligent navigation-helper agents are critical as they can guide users through unknown areas with environmental awareness and conversational ability, serving as potential accessibility tools for individuals with disabilities. In this work, we first introduce a novel benchmark, Respond to Help Requests (R2H), to promote the development of multimodal navigation helpers capable of responding to requests for help, utilizing existing dialog-based embodied datasets. R2H comprises two main tasks: (1) Respond to Dialog History (RDH), which assesses the helper agent’s ability to generate informative responses based on a given dialog history, and (2) Respond during Interaction (RdI), which evaluates the effectiveness and efficiency of the responses during continuous cooperation with a task performer. Furthermore, we explore two approaches to building the navigation-helper agent: fine-tuning a novel task-oriented multimodal response generation model that can see and respond, named SeeRee, and employing a multimodal large language model in a zero-shot manner. We analyze the tasks and methods through both automatic benchmarking and human evaluations.
Anthology ID:
2023.emnlp-main.915
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
14803–14819
URL:
https://aclanthology.org/2023.emnlp-main.915
DOI:
10.18653/v1/2023.emnlp-main.915
Cite (ACL):
Yue Fan, Jing Gu, Kaizhi Zheng, and Xin Wang. 2023. R2H: Building Multimodal Navigation Helpers that Respond to Help Requests. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14803–14819, Singapore. Association for Computational Linguistics.
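BibTeX:
The following entry is assembled from the metadata above; note that the entry key (fan-etal-2023-r2h) is inferred from the Anthology's usual author-year-title naming convention rather than taken from this page, which omits it.
% Entry key inferred from ACL Anthology conventions; all fields taken from the page metadata.
@inproceedings{fan-etal-2023-r2h,
    title = "{R2H}: Building Multimodal Navigation Helpers that Respond to Help Requests",
    author = "Fan, Yue and Gu, Jing and Zheng, Kaizhi and Wang, Xin",
    editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.915",
    doi = "10.18653/v1/2023.emnlp-main.915",
    pages = "14803--14819",
}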
Cite (Informal):
R2H: Building Multimodal Navigation Helpers that Respond to Help Requests (Fan et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.915.pdf
Video:
https://aclanthology.org/2023.emnlp-main.915.mp4