From Rewriting to Remembering: Common Ground for Conversational QA Models

Marco Del Tredici, Xiaoyu Shen, Gianni Barlacchi, Bill Byrne, Adrià de Gispert


Abstract
In conversational QA, models have to leverage information from previous turns to answer upcoming questions. Current approaches, such as Question Rewriting, struggle to extract relevant information as the conversation unfolds. We introduce the Common Ground (CG), an approach that accumulates conversational information as it emerges and selects the relevant information at every turn. We show that CG offers a more efficient and human-like way to exploit conversational information than existing approaches, leading to improvements on Open Domain Conversational QA.
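The paper itself defines how the Common Ground is built and used; purely as an illustrative sketch of the general idea (accumulate turn information, then select what is relevant to the current question), one might maintain such a store as below. The class name, the overlap-based relevance score, and the top-k selection are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only: accumulate information from previous turns and
# select the pieces relevant to the current question. Scoring by word
# overlap and keeping the top k facts are assumptions, not the paper's method.
class CommonGround:
    def __init__(self):
        self.facts = []  # information accumulated from previous turns

    def update(self, turn_text: str) -> None:
        """Add information extracted from a new conversational turn."""
        self.facts.append(turn_text)

    def select(self, question: str, k: int = 3) -> list:
        """Return the k accumulated facts most relevant to the question."""
        q_tokens = set(question.lower().split())
        scored = [(len(q_tokens & set(f.lower().split())), f) for f in self.facts]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [fact for score, fact in scored[:k] if score > 0]

# Example usage: the selected facts would be passed, together with the
# current question, to a downstream QA model.
cg = CommonGround()
cg.update("User asked about the 2022 NLP4ConvAI workshop in Dublin.")
cg.update("The workshop was co-located with ACL 2022.")
print(cg.select("Where was the workshop held?"))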
Anthology ID:
2022.nlp4convai-1.7
Volume:
Proceedings of the 4th Workshop on NLP for Conversational AI
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Bing Liu, Alexandros Papangelis, Stefan Ultes, Abhinav Rastogi, Yun-Nung Chen, Georgios Spithourakis, Elnaz Nouri, Weiyan Shi
Venue:
NLP4ConvAI
Publisher:
Association for Computational Linguistics
Pages:
70–76
URL:
https://aclanthology.org/2022.nlp4convai-1.7
DOI:
10.18653/v1/2022.nlp4convai-1.7
Cite (ACL):
Marco Del Tredici, Xiaoyu Shen, Gianni Barlacchi, Bill Byrne, and Adrià de Gispert. 2022. From Rewriting to Remembering: Common Ground for Conversational QA Models. In Proceedings of the 4th Workshop on NLP for Conversational AI, pages 70–76, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
From Rewriting to Remembering: Common Ground for Conversational QA Models (Del Tredici et al., NLP4ConvAI 2022)
PDF:
https://aclanthology.org/2022.nlp4convai-1.7.pdf
Video:
https://aclanthology.org/2022.nlp4convai-1.7.mp4
Data
QReCC