Noisy Self-Knowledge Distillation for Text Summarization

Yang Liu, Sheng Shen, Mirella Lapata


Abstract
In this paper we apply self-knowledge distillation to text summarization, which we argue can alleviate problems with maximum-likelihood training on single-reference and noisy datasets. Instead of relying on one-hot annotation labels, our student summarization model is trained with guidance from a teacher which generates smoothed labels to help regularize training. Furthermore, to better model uncertainty during training, we introduce multiple noise signals for both teacher and student models. We demonstrate experimentally on three benchmarks that our framework boosts the performance of both pretrained and non-pretrained summarizers, achieving state-of-the-art results.
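The abstract describes training a student summarizer against a teacher's smoothed output distribution instead of one-hot reference labels. The sketch below illustrates that kind of distillation loss in PyTorch; the function name, hyperparameters, and loss weighting are illustrative assumptions rather than the paper's exact formulation, and the paper's noise injection (e.g., perturbing inputs or keeping dropout active) is omitted. See the authors' code at nlpyang/NoisySumm for the actual implementation.

```python
import torch
import torch.nn.functional as F

def self_kd_loss(student_logits, teacher_logits, target_ids,
                 kd_weight=0.5, temperature=1.0, pad_id=0):
    """Minimal sketch of a self-knowledge-distillation objective:
    standard NLL on the single reference plus a KL term pulling the
    student toward the teacher's soft labels (names are illustrative)."""
    vocab_size = student_logits.size(-1)

    # Maximum-likelihood term against the one-hot reference summary.
    nll = F.cross_entropy(student_logits.view(-1, vocab_size),
                          target_ids.view(-1), ignore_index=pad_id)

    # Distillation term: KL divergence to the teacher's smoothed distribution.
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_p = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    kd = F.kl_div(student_logp, teacher_p, reduction="batchmean") * temperature ** 2

    # Interpolate the two terms; kd_weight is an assumed hyperparameter.
    return (1.0 - kd_weight) * nll + kd_weight * kd
```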
Anthology ID:
2021.naacl-main.56
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
692–703
URL:
https://aclanthology.org/2021.naacl-main.56
DOI:
10.18653/v1/2021.naacl-main.56
Cite (ACL):
Yang Liu, Sheng Shen, and Mirella Lapata. 2021. Noisy Self-Knowledge Distillation for Text Summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 692–703, Online. Association for Computational Linguistics.
Cite (Informal):
Noisy Self-Knowledge Distillation for Text Summarization (Liu et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.56.pdf
Video:
https://aclanthology.org/2021.naacl-main.56.mp4
Code:
nlpyang/NoisySumm
Data:
CNN/Daily Mail, WikiCatSum, WikiSum