Building Goal-oriented Document-grounded Dialogue Systems

Xi Chen, Faner Lin, Yeju Zhou, Kaixin Ma, Jonathan Francis, Eric Nyberg, Alessandro Oltramari


Abstract
In this paper, we describe our systems for the two Doc2Dial shared tasks: knowledge identification and response generation. We propose several pre-processing and post-processing methods, and we experiment with data augmentation by pre-training the models on other relevant datasets. Our best model for knowledge identification outperforms the baseline by more than 10.5 F1 points on the test-dev split, and our best model for response generation outperforms the baseline by more than 11 SacreBLEU points on the test-dev split.
Anthology ID:
2021.dialdoc-1.14
Volume:
Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021)
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP | dialdoc
Publisher:
Association for Computational Linguistics
Pages:
109–112
URL:
https://aclanthology.org/2021.dialdoc-1.14
DOI:
10.18653/v1/2021.dialdoc-1.14
PDF:
https://aclanthology.org/2021.dialdoc-1.14.pdf