Conversational Question Answering with Language Models Generated Reformulations over Knowledge Graph

Lihui Liu, Blaine Hill, Boxin Du, Fei Wang, Hanghang Tong


Abstract
Conversational question answering (ConvQA) over knowledge graphs (KGs) involves answering multi-turn natural language questions about information contained in a KG. State-of-the-art ConvQA methods often struggle with inexplicit question-answer pairs: inputs that are easy for humans to understand given the conversation history, but hard for a machine to interpret, which can degrade ConvQA performance. To address this problem, we propose a reinforcement learning (RL) based model, CoRnNet, which uses question reformulations generated by large language models (LLMs) to improve ConvQA performance. CoRnNet adopts a teacher-student architecture: a teacher model learns question representations from human-written reformulations, and a student model learns to mimic the teacher's output using LLM-generated reformulations. The learned question representation is then used by an RL model to locate the correct answer in the KG. Extensive experimental results show that CoRnNet outperforms state-of-the-art ConvQA models.
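The teacher-student idea in the abstract can be sketched in a few lines: the teacher encodes the human-written reformulation, the student encodes the LLM-generated one, and the student is trained to match the teacher's representation. This is a minimal illustrative sketch only; the toy linear "encoders" and the MSE mimicry loss are assumptions for illustration, not the paper's actual architecture or objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoders": random linear maps standing in for the teacher and
# student question encoders (hypothetical; the paper's encoders are
# learned neural models over question text).
DIM_IN, DIM_OUT = 32, 8
W_teacher = rng.normal(size=(DIM_IN, DIM_OUT))
W_student = rng.normal(size=(DIM_IN, DIM_OUT))

def encode(weights, features):
    """Map a question feature vector to a fixed-size representation."""
    return features @ weights

def mimicry_loss(w_student, w_teacher, x_llm, x_human):
    """MSE between the student's representation of the LLM-generated
    reformulation and the teacher's representation of the human-written
    one -- minimizing this makes the student mimic the teacher."""
    teacher_repr = encode(w_teacher, x_human)  # teacher sees human text
    student_repr = encode(w_student, x_llm)    # student sees LLM text
    return float(np.mean((student_repr - teacher_repr) ** 2))

# One toy question: feature vectors for the two reformulations.
x_human = rng.normal(size=DIM_IN)
x_llm = x_human + 0.1 * rng.normal(size=DIM_IN)  # LLM version, slightly noisy

loss = mimicry_loss(W_student, W_teacher, x_llm, x_human)
print(f"mimicry loss: {loss:.4f}")
```

In the full model, the representation produced this way would then condition the RL agent that walks the KG to locate the answer entity.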
Anthology ID: 2024.findings-acl.48
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 839–850
URL: https://aclanthology.org/2024.findings-acl.48
Cite (ACL): Lihui Liu, Blaine Hill, Boxin Du, Fei Wang, and Hanghang Tong. 2024. Conversational Question Answering with Language Models Generated Reformulations over Knowledge Graph. In Findings of the Association for Computational Linguistics: ACL 2024, pages 839–850, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Conversational Question Answering with Language Models Generated Reformulations over Knowledge Graph (Liu et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.48.pdf