SLDT: Sequential Latent Document Transformer for Multilingual Document-based Dialogue

Zhanyu Ma, Zeming Liu, Jian Ye


Abstract
Multilingual document-grounded dialogue requires a system to generate responses based on both the multilingual conversation context and external knowledge sources. Traditional pipeline methods that handle knowledge identification and response generation as separate steps, while effective in certain scenarios, suffer from error propagation and fail to capture the interdependence between the two sub-tasks. To overcome these challenges, we propose SLDT, which treats passage-knowledge selection as a sequential decision process rather than a single-step decision. We achieved third place in the DialDoc 2023 shared task, and we also validated the effectiveness of our method on other datasets. Ablation experiments further show that our method yields significantly larger improvements over the base model than competing methods.
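The abstract's key distinction is between single-step knowledge selection and a sequential decision process. As a rough illustration of that distinction only (not the paper's actual SLDT architecture), the sketch below contrasts one-shot top-k passage ranking with a greedy sequential selection whose query state is updated after each pick; the toy hash-based "encoder", the cosine scoring, and all function names are hypothetical stand-ins.

```python
# Illustrative sketch only: contrasts single-step vs. sequential passage selection
# with a toy deterministic "embedding". Not the paper's SLDT model.
import hashlib
import numpy as np

def embed(text: str, dim: int = 16) -> np.ndarray:
    """Toy deterministic 'embedding' via hashing; stands in for a real encoder."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def single_step_select(context: str, passages: list[str], k: int = 2) -> list[str]:
    """Single-step baseline: rank all passages once against the dialogue context."""
    c = embed(context)
    scores = [float(embed(p) @ c) for p in passages]
    order = np.argsort(scores)[::-1]
    return [passages[i] for i in order[:k]]

def sequential_select(context: str, passages: list[str], k: int = 2) -> list[str]:
    """Sequential decision process: after each pick, fold the chosen passage back
    into the query state so later picks are conditioned on earlier ones."""
    state = embed(context)
    remaining = list(passages)
    chosen = []
    for _ in range(min(k, len(remaining))):
        scores = [float(embed(p) @ state) for p in remaining]
        best = int(np.argmax(scores))
        chosen.append(remaining.pop(best))
        # Update the query state with the newly selected knowledge.
        state = state + embed(chosen[-1])
        state = state / np.linalg.norm(state)
    return chosen

if __name__ == "__main__":
    ctx = "How do I renew my driver's licence online?"
    docs = ["Renewals can be done online through the portal.",
            "Fees differ for new licences and renewals.",
            "Online renewal requires a recent digital photo."]
    print(single_step_select(ctx, docs))
    print(sequential_select(ctx, docs))
```

The sequential variant conditions each selection on what has already been chosen, which is the property the abstract argues a single-step ranker cannot capture.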
Anthology ID:
2023.dialdoc-1.7
Volume:
Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Smaranda Muresan, Vivian Chen, Casey Kennington, David Vandyke, Nina Dethlefs, Koji Inoue, Erik Ekstedt, Stefan Ultes
Venue:
dialdoc
Publisher:
Association for Computational Linguistics
Pages:
57–67
URL:
https://aclanthology.org/2023.dialdoc-1.7
DOI:
10.18653/v1/2023.dialdoc-1.7
Cite (ACL):
Zhanyu Ma, Zeming Liu, and Jian Ye. 2023. SLDT: Sequential Latent Document Transformer for Multilingual Document-based Dialogue. In Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, pages 57–67, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
SLDT: Sequential Latent Document Transformer for Multilingual Document-based Dialogue (Ma et al., dialdoc 2023)
PDF:
https://aclanthology.org/2023.dialdoc-1.7.pdf