Integrating Question Rewrites in Conversational Question Answering: A Reinforcement Learning Approach

Etsuko Ishii, Bryan Wilie, Yan Xu, Samuel Cahyawijaya, Pascale Fung


Abstract
Resolving dependencies on the dialogue history is one of the main obstacles in research on conversational question answering (CQA). The conversational question rewrites (QR) task has been shown to be effective for this problem by reformulating questions into a self-contained form. However, QR datasets are limited, and existing methods tend to assume that a corresponding QR dataset exists for every CQA dataset. This paper proposes a reinforcement learning approach that integrates the QR and CQA tasks without corresponding labeled QR datasets. We train a QR model based on the reward signal obtained from the CQA model, and the experimental results show that our approach can bring improvement over pipeline approaches.
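To illustrate the core idea of training a rewriter from a QA-derived reward, here is a minimal, self-contained REINFORCE sketch. All names (`qa_model`, the candidate rewrites, `token_f1`) are hypothetical stand-ins, not the paper's actual models or data: a toy QR "policy" chooses among candidate rewrites, a frozen QA stub answers the rewrite, and token-level F1 against the gold answer is the reward used in the policy-gradient update.

```python
import math
import random

def token_f1(pred, gold):
    """Token-overlap F1, a common QA reward signal."""
    p, g = pred.split(), gold.split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def qa_model(rewrite):
    # Hypothetical frozen QA model: it can only answer the
    # self-contained rewrite, not the context-dependent one.
    return "in 1969" if "moon landing" in rewrite else "unknown"

# Toy "QR policy": a categorical distribution over candidate rewrites
# of the in-context question "when did it happen?".
candidates = ["when did it happen", "when did the moon landing happen"]
gold_answer = "in 1969"
logits = [0.0, 0.0]
lr, baseline = 1.0, 0.0
random.seed(0)

for step in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(candidates)), probs)[0]  # sample a rewrite
    reward = token_f1(qa_model(candidates[i]), gold_answer)
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
    adv = reward - baseline
    # REINFORCE: grad of log p(i) w.r.t. logit j is (1[j==i] - probs[j]).
    for j in range(len(logits)):
        grad = (1.0 - probs[j]) if j == i else -probs[j]
        logits[j] += lr * adv * grad

best = max(range(len(candidates)), key=lambda j: logits[j])
```

After training, the policy concentrates probability on the self-contained rewrite (`best == 1`), since only that rewrite lets the QA model recover the gold answer. The paper's actual method applies this reward-driven idea with neural QR and CQA models rather than this toy categorical policy.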
Anthology ID:
2022.acl-srw.6
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
55–66
URL:
https://aclanthology.org/2022.acl-srw.6
DOI:
10.18653/v1/2022.acl-srw.6
Cite (ACL):
Etsuko Ishii, Bryan Wilie, Yan Xu, Samuel Cahyawijaya, and Pascale Fung. 2022. Integrating Question Rewrites in Conversational Question Answering: A Reinforcement Learning Approach. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 55–66, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Integrating Question Rewrites in Conversational Question Answering: A Reinforcement Learning Approach (Ishii et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-srw.6.pdf
Data
CANARD, CoQA, QReCC, QuAC