Improving End-to-End Task-Oriented Dialog System with A Simple Auxiliary Task

Yohan Lee


Abstract
The paradigm of leveraging large pre-trained language models has driven significant progress on task-oriented dialogue (TOD) benchmarks. In this paper, we combine this paradigm with a multi-task learning framework for end-to-end TOD modeling by adopting span prediction as an auxiliary task. In the end-to-end setting, our model achieves new state-of-the-art results with combined scores of 108.3 and 107.5 on MultiWOZ 2.0 and MultiWOZ 2.1, respectively. Furthermore, through domain adaptation experiments in the few-shot setting, we demonstrate that multi-task learning improves not only the model's performance but also its generalization capability. The code is available at github.com/bepoetree/MTTOD.
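The multi-task setup described in the abstract can be sketched as a weighted sum of the main response-generation loss and the auxiliary span-prediction loss. The snippet below is a minimal illustration, not the paper's implementation: the function names and the `aux_weight` hyperparameter are assumptions, and cross-entropy is computed per position for clarity.

```python
import math

def cross_entropy(logits, target):
    """Softmax cross-entropy for a single prediction position."""
    m = max(logits)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target]

def joint_loss(resp_logits, resp_targets, span_logits, span_targets, aux_weight=1.0):
    """Main generation loss plus a weighted span-prediction auxiliary loss.

    resp_logits / span_logits: list of per-position logit vectors.
    resp_targets / span_targets: list of gold indices, one per position.
    """
    l_resp = sum(cross_entropy(l, t)
                 for l, t in zip(resp_logits, resp_targets)) / len(resp_targets)
    l_span = sum(cross_entropy(l, t)
                 for l, t in zip(span_logits, span_targets)) / len(span_targets)
    return l_resp + aux_weight * l_span
```

In practice both losses would be computed over batches by the same pre-trained encoder-decoder, with the span head attached to the encoder; `aux_weight` controls how strongly the auxiliary task shapes the shared representations.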
Anthology ID:
2021.findings-emnlp.112
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venues:
EMNLP | Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1296–1303
URL:
https://aclanthology.org/2021.findings-emnlp.112
DOI:
10.18653/v1/2021.findings-emnlp.112
Cite (ACL):
Yohan Lee. 2021. Improving End-to-End Task-Oriented Dialog System with A Simple Auxiliary Task. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1296–1303, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Improving End-to-End Task-Oriented Dialog System with A Simple Auxiliary Task (Lee, Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.112.pdf
Code:
bepoetree/mttod
Data:
MultiWOZ