%0 Conference Proceedings
%T Domain-Lifelong Learning for Dialogue State Tracking via Knowledge Preservation Networks
%A Liu, Qingbin
%A Cao, Pengfei
%A Liu, Cao
%A Chen, Jiansong
%A Cai, Xunliang
%A Yang, Fan
%A He, Shizhu
%A Liu, Kang
%A Zhao, Jun
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F liu-etal-2021-domain
%X Dialogue state tracking (DST), which estimates user goals given a dialogue context, is an essential component of task-oriented dialogue systems. Conventional DST models are usually trained offline, which requires a fixed dataset prepared in advance. This paradigm is often impractical in real-world applications since online dialogue systems usually involve continually emerging new data and domains. Therefore, this paper explores Domain-Lifelong Learning for Dialogue State Tracking (DLL-DST), which aims to continually train a DST model on new data so that it learns newly emerging domains while avoiding catastrophic forgetting of previously learned domains. To this end, we propose a novel domain-lifelong learning method, called Knowledge Preservation Networks (KPN), which consists of multi-prototype enhanced retrospection and multi-strategy knowledge distillation, to solve the problems of expression diversity and combinatorial explosion in the DLL-DST task. Experimental results show that KPN effectively alleviates catastrophic forgetting and outperforms previous state-of-the-art lifelong learning methods by 4.25% and 8.27% in whole joint goal accuracy on the MultiWOZ benchmark and the SGD benchmark, respectively.
%R 10.18653/v1/2021.emnlp-main.176
%U https://aclanthology.org/2021.emnlp-main.176
%U https://doi.org/10.18653/v1/2021.emnlp-main.176
%P 2301-2311