Multi-Modal Open-Domain Dialogue

Kurt Shuster, Eric Michael Smith, Da Ju, Jason Weston


Abstract
Recent work in open-domain conversational agents has demonstrated that significant improvements in humanness and user preference can be achieved via massive scaling in both pre-training data and model size (Adiwardana et al., 2020; Roller et al., 2020). However, if we want to build agents with human-like abilities, we must expand beyond handling just text. A particularly important topic is the ability to see images and communicate about what is perceived. With the goal of getting humans to engage in multi-modal dialogue, we investigate combining components from state-of-the-art open-domain dialogue agents with those from state-of-the-art vision models. We study incorporating different image fusion schemes and domain-adaptive pre-training and fine-tuning strategies, and show that our best resulting model outperforms strong existing models in multi-modal dialogue while simultaneously performing as well as its text-only predecessor, BlenderBot (Roller et al., 2020), in text-based conversation. We additionally investigate and incorporate safety components in our final model, and show that such efforts do not diminish model performance with respect to human preference.
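The "image fusion schemes" mentioned in the abstract refer to ways of injecting features from a pretrained vision model into the dialogue Transformer. The sketch below is only an illustration of one such scheme, not the authors' implementation (the paper's code is released in ParlAI): a pooled image feature vector is projected into the token-embedding space and prepended to the text sequence before encoding. All names and dimensions here (`ImageFusionEncoder`, the 2048-d feature size, the toy vocabulary) are illustrative assumptions; 2048 simply matches common pooled ResNeXt/ResNet features.

```python
import torch
import torch.nn as nn

class ImageFusionEncoder(nn.Module):
    """Hypothetical sketch of one image-fusion variant: project a
    precomputed image feature vector into the token-embedding space
    and prepend it to the token sequence before the Transformer."""

    def __init__(self, vocab_size=8008, d_model=512, n_heads=8,
                 n_layers=2, image_feat_dim=2048):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        # Map e.g. 2048-d pooled CNN features to the model width.
        self.img_proj = nn.Linear(image_feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_ids, image_feats):
        # token_ids: (batch, seq_len); image_feats: (batch, image_feat_dim)
        txt = self.tok_embed(token_ids)                # (B, T, D)
        img = self.img_proj(image_feats).unsqueeze(1)  # (B, 1, D)
        fused = torch.cat([img, txt], dim=1)           # image "token" first
        return self.encoder(fused)

# Toy usage with random inputs.
model = ImageFusionEncoder()
tokens = torch.randint(0, 8008, (2, 16))
feats = torch.randn(2, 2048)
out = model(tokens, feats)
print(out.shape)  # torch.Size([2, 17, 512])
```

The paper compares this kind of early injection against alternatives such as fusing image features later in the network; which variant works best is one of its empirical questions.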
Anthology ID: 2021.emnlp-main.398
Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2021
Address: Online and Punta Cana, Dominican Republic
Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 4863–4883
URL: https://aclanthology.org/2021.emnlp-main.398
DOI: 10.18653/v1/2021.emnlp-main.398
Cite (ACL): Kurt Shuster, Eric Michael Smith, Da Ju, and Jason Weston. 2021. Multi-Modal Open-Domain Dialogue. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4863–4883, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal): Multi-Modal Open-Domain Dialogue (Shuster et al., EMNLP 2021)
PDF: https://aclanthology.org/2021.emnlp-main.398.pdf
Video: https://aclanthology.org/2021.emnlp-main.398.mp4
Data: Blended Skill Talk, COCO Captions, ConvAI2, EmpatheticDialogues, Image-Chat, Wizard of Wikipedia