%0 Conference Proceedings
%T Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization
%A Li, Haoran
%A Xu, Song
%A Yuan, Peng
%A Wang, Yujia
%A Wu, Youzheng
%A He, Xiaodong
%A Zhou, Bowen
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F li-etal-2021-learn
%X The copying mechanism has had considerable success in abstractive summarization, facilitating models to directly copy words from the input text to the output summary. Existing works mostly employ encoder-decoder attention, which applies copying at each time step independently of the former ones. However, this may sometimes lead to incomplete copying. In this paper, we propose a novel copying scheme named Correlational Copying Network (CoCoNet) that enhances the standard copying mechanism by keeping track of the copying history. It thereby takes advantage of prior copying distributions and, at each time step, explicitly encourages the model to copy the input word that is relevant to the previously copied one. In addition, we strengthen CoCoNet through pre-training with suitable corpora that simulate the copying behaviors. Experimental results show that CoCoNet can copy more accurately and achieves new state-of-the-art performances on summarization benchmarks, including CNN/DailyMail for news summarization and SAMSum for dialogue summarization. The code and checkpoint will be publicly available.
%R 10.18653/v1/2021.emnlp-main.336
%U https://aclanthology.org/2021.emnlp-main.336
%U https://doi.org/10.18653/v1/2021.emnlp-main.336
%P 4091-4101
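
To make the abstract's idea concrete: below is a minimal sketch of a history-aware copy step, in which the copy distribution at step t blends fresh encoder-decoder attention with a term derived from the previous step's copy distribution. This is an illustrative assumption, not the authors' CoCoNet formulation; the function name correlational_copy_step, the shift-based correlation term, and the weight gamma are all hypothetical.

    import numpy as np

    def correlational_copy_step(attn_t, prev_copy_dist, gamma=0.5):
        """One decoding step of a history-aware copy distribution.

        Illustrative sketch only, not the CoCoNet method from the paper.
        attn_t:         encoder-decoder attention over source tokens at step t, shape (src_len,)
        prev_copy_dist: copy distribution used at step t-1, shape (src_len,)
        gamma:          hypothetical interpolation weight for the history term
        """
        # Correlation term (assumption): shift probability mass one source
        # position to the right, nudging the model to continue a span it
        # began copying at the previous step.
        shifted = np.roll(prev_copy_dist, 1)
        shifted[0] = 0.0
        correlation = shifted / (shifted.sum() + 1e-9)

        # Blend fresh attention with the history-derived term and renormalize,
        # so copying at step t is no longer independent of step t-1.
        copy_dist = (1.0 - gamma) * attn_t + gamma * correlation
        return copy_dist / copy_dist.sum()

    # Toy usage: at step t-1 the model mostly copied source position 2;
    # the history term shifts step t's copy distribution toward position 3.
    attn_t = np.array([0.25, 0.25, 0.25, 0.25])
    prev = np.array([0.05, 0.05, 0.85, 0.05])
    print(correlational_copy_step(attn_t, prev))

The design choice shown (a fixed one-position shift) is the simplest stand-in for "copy the input word that is relevant to the previously copied one"; the paper instead learns this relevance from prior copying distributions.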