%0 Conference Proceedings
%T RecInDial: A Unified Framework for Conversational Recommendation with Pretrained Language Models
%A Wang, Lingzhi
%A Hu, Huang
%A Sha, Lei
%A Xu, Can
%A Jiang, Daxin
%A Wong, Kam-Fai
%Y He, Yulan
%Y Ji, Heng
%Y Li, Sujian
%Y Liu, Yang
%Y Chang, Chua-Hui
%S Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
%D 2022
%8 November
%I Association for Computational Linguistics
%C Online only
%F wang-etal-2022-recindial
%X Conversational Recommender System (CRS), which aims to recommend high-quality items to users through interactive conversations, has gained great research interest recently. A CRS is usually composed of a recommendation module and a generation module. In previous work, these two modules are loosely connected during model training and only shallowly integrated during inference, where a simple switching or copy mechanism is adopted to incorporate recommended items into generated responses. Moreover, current end-to-end neural models trained on small crowd-sourced datasets (e.g., 10K dialogs in the ReDial dataset) tend to overfit and have poor chit-chat ability. In this work, we propose RecInDial, a novel unified framework that integrates recommendation into dialog generation by introducing a vocabulary pointer. To tackle the low-resource issue in CRS, we fine-tune large-scale pretrained language models to generate fluent and diverse responses, and introduce a knowledge-aware bias learned from an entity-oriented knowledge graph to enhance recommendation performance. Furthermore, we propose to evaluate CRS models in an end-to-end manner, which reflects the overall performance of the entire system rather than that of individual modules, in contrast to the separate module-level evaluations used in previous work. Experiments on the benchmark dataset ReDial show that RecInDial significantly surpasses state-of-the-art methods, and more extensive analyses further demonstrate the effectiveness of our model.
%U https://aclanthology.org/2022.aacl-main.37
%P 489-500