%0 Conference Proceedings
%T Stylized Knowledge-Grounded Dialogue Generation via Disentangled Template Rewriting
%A Sun, Qingfeng
%A Xu, Can
%A Hu, Huang
%A Wang, Yujing
%A Miao, Jian
%A Geng, Xiubo
%A Chen, Yining
%A Xu, Fei
%A Jiang, Daxin
%Y Carpuat, Marine
%Y de Marneffe, Marie-Catherine
%Y Meza Ruiz, Ivan Vladimir
%S Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2022
%8 July
%I Association for Computational Linguistics
%C Seattle, United States
%F sun-etal-2022-stylized
%X Current Knowledge-Grounded Dialogue Generation (KDG) models specialize in producing rational and factual responses. However, to establish long-term relationships with users, a KDG model also needs the capability to generate responses in a desired style or attribute. Thus, we study a new problem: Stylized Knowledge-Grounded Dialogue Generation (SKDG). It presents two challenges: (1) how to train an SKDG model when no <context, knowledge, stylized response> triples are available, and (2) how to cohere with the context and preserve the knowledge while generating a stylized response. In this paper, we propose a novel disentangled template rewriting (DTR) method that generates responses by combining disentangled style templates (from a monolingual stylized corpus) with content templates (from a KDG corpus). The entire framework is end-to-end differentiable and learned without supervision. Extensive experiments on two benchmarks show that DTR achieves a significant improvement on all evaluation metrics over previous state-of-the-art stylized dialogue generation methods. Moreover, DTR achieves performance comparable to state-of-the-art KDG methods in the standard KDG evaluation setting.
%R 10.18653/v1/2022.naacl-main.241
%U https://aclanthology.org/2022.naacl-main.241
%U https://doi.org/10.18653/v1/2022.naacl-main.241
%P 3304-3318