Reason first, then respond: Modular Generation for Knowledge-infused Dialogue

Leonard Adolphs, Kurt Shuster, Jack Urbanek, Arthur Szlam, Jason Weston


Abstract
Large language models can produce fluent dialogue but often hallucinate factual inaccuracies. While retrieval-augmented models help alleviate this issue, they still face the difficult challenge of simultaneously reasoning to provide correct knowledge and generating a conversational response. In this work, we propose a modular model, Knowledge to Response (K2R), for incorporating knowledge into conversational agents, which breaks down this problem into two easier steps. K2R first generates a knowledge sequence, given a dialogue context, as an intermediate step. After this “reasoning step”, the model then attends to its own generated knowledge sequence, as well as the dialogue context, to produce a final response. In detailed experiments, we find that such a model hallucinates less in knowledge-grounded dialogue tasks, and has advantages in terms of interpretability and modularity. In particular, it can be used to fuse QA and dialogue systems together, enabling dialogue agents to give knowledgeable answers and QA models to give conversational responses in a zero-shot setting.
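The two-step pipeline described in the abstract can be approximated with two separate sequence-to-sequence models: one that predicts a knowledge sequence from the dialogue context, and one that produces the final reply conditioned on both. The following is a minimal sketch, assuming Hugging Face transformers with generic BART checkpoints as stand-ins; the checkpoint names, the "__knowledge__" delimiter, and the prompt format are illustrative assumptions, not the authors' released K2R models.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def generate(model, tokenizer, text, max_new_tokens=64):
    # Encode one input string and decode a single greedy continuation.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Placeholder checkpoints; in practice each model would be fine-tuned for its
# own step (knowledge prediction vs. response generation).
tok = AutoTokenizer.from_pretrained("facebook/bart-large")
knowledge_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")
response_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")

dialogue_context = "Who wrote The Old Man and the Sea? I loved that book."

# Step 1 ("reason"): generate an intermediate knowledge sequence from the context.
knowledge = generate(knowledge_model, tok, dialogue_context)

# Step 2 ("respond"): condition on both the dialogue context and the generated
# knowledge (concatenated here with an assumed delimiter) to produce the reply.
response = generate(response_model, tok, dialogue_context + " __knowledge__ " + knowledge)
print(response)

Because the intermediate knowledge sequence is explicit text, it can be inspected directly, or replaced by the output of a separate QA system, which is what enables the zero-shot fusion of QA and dialogue models mentioned above.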
Anthology ID: 2022.findings-emnlp.527
Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 7112–7132
URL: https://aclanthology.org/2022.findings-emnlp.527
DOI: 10.18653/v1/2022.findings-emnlp.527
Cite (ACL): Leonard Adolphs, Kurt Shuster, Jack Urbanek, Arthur Szlam, and Jason Weston. 2022. Reason first, then respond: Modular Generation for Knowledge-infused Dialogue. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 7112–7132, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Reason first, then respond: Modular Generation for Knowledge-infused Dialogue (Adolphs et al., Findings 2022)
PDF: https://aclanthology.org/2022.findings-emnlp.527.pdf
Video: https://aclanthology.org/2022.findings-emnlp.527.mp4