Triplet-Free Knowledge-Guided Response Generation

Dongming Li, Jianfeng Liu, Baoyuan Wang


Abstract
Generating vivid and informative responses (e.g., comments for social posts and utterances for dialogues) is challenging without access to relevant knowledge. Prior works focus on constructing the "latent" knowledge first and then learning how to "ground" it based on pseudo (context, knowledge, response) triplets. However, retrieving the latent knowledge behind real responses is inherently difficult. In this paper, instead of focusing on how to ground knowledge given the responses, we take a different perspective and directly optimize the final responses for given guiding knowledge. This allows us to re-formulate the entire problem in a simplified yet more scalable way. Specifically, we pretrain a response language model (LM) to measure the relevance and consistency between any context and response, then use search engines to collect the top-ranked passages to serve as the guiding knowledge, without explicitly optimizing for the "best" latent knowledge that corresponds to a given response. The final response generation model is trained through reinforcement learning, taking both the response LM prior and the knowledge-injection rate as rewards. For better evaluation, we construct a new Chinese benchmark, "IceKC", using fresh multimodal online social posts. Both automatic and human evaluations show that our zero-resource approach performs significantly better than prior works.
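The abstract describes a reward that combines a response-LM prior with a knowledge-injection rate. The sketch below illustrates one plausible way such a composite reward could be computed; the convex-combination form, the coefficient `alpha`, and the token-overlap proxy for knowledge injection are illustrative assumptions, not the paper's exact formulation.

```python
def knowledge_injection_rate(response_tokens, knowledge_tokens):
    """Hypothetical proxy: fraction of response tokens that also
    appear in the guiding knowledge passage."""
    if not response_tokens:
        return 0.0
    knowledge_set = set(knowledge_tokens)
    hits = sum(1 for t in response_tokens if t in knowledge_set)
    return hits / len(response_tokens)


def combined_reward(lm_prior_score, injection_rate, alpha=0.5):
    """Assumed convex combination of the two reward signals named in
    the abstract: the response-LM prior (relevance/consistency score)
    and the knowledge-injection rate. `alpha` is a hypothetical
    trade-off coefficient."""
    return alpha * lm_prior_score + (1.0 - alpha) * injection_rate


# Toy usage: a response reusing half of the knowledge tokens,
# with a moderately high LM prior score.
response = ["the", "museum", "opens", "at", "nine"]
knowledge = ["museum", "opens", "daily", "nine", "closed", "monday"]
rate = knowledge_injection_rate(response, knowledge)  # 3/5 = 0.6
reward = combined_reward(lm_prior_score=0.8, injection_rate=rate)
```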
Anthology ID:
2023.findings-acl.815
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12881–12899
URL:
https://aclanthology.org/2023.findings-acl.815
DOI:
10.18653/v1/2023.findings-acl.815
Cite (ACL):
Dongming Li, Jianfeng Liu, and Baoyuan Wang. 2023. Triplet-Free Knowledge-Guided Response Generation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 12881–12899, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Triplet-Free Knowledge-Guided Response Generation (Li et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.815.pdf