2022
R-TeaFor: Regularized Teacher-Forcing for Abstractive Summarization
Guan-Yu Lin | Pu-Jen Cheng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Teacher-forcing is widely used in training sequence generation models to improve sampling efficiency and to stabilize training. However, teacher-forcing is vulnerable to the exposure bias problem. Previous works have attempted to address exposure bias by modifying the training data to simulate model-generated results. Nevertheless, they do not consider the pairwise relationship between the original training data and the modified data, which provides additional information during training. Hence, we propose Regularized Teacher-Forcing (R-TeaFor) to utilize this relationship for better regularization. Empirically, our experiments show that R-TeaFor outperforms previous state-of-the-art summarization models, and that the results generalize to different pre-trained models.
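The abstract does not give the training objective in closed form. As one way to read "utilize this relationship for better regularization", the sketch below pairs the standard teacher-forced cross-entropy on the original targets with a consistency term between the model's distributions on the original and the perturbed (model-simulated) targets. It assumes a HuggingFace-style seq2seq model interface; the function name `r_teafor_loss`, the KL-based regularizer, and the weight `alpha` are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def r_teafor_loss(model, input_ids, target_ids, perturbed_target_ids, alpha=1.0):
    """Sketch of a regularized teacher-forcing loss (assumed form, not the
    paper's exact objective).

    `perturbed_target_ids` stands in for training targets modified to
    simulate model-generated results; how they are produced is left open.
    """
    # Teacher-forced pass conditioned on the original reference targets.
    logits_orig = model(input_ids=input_ids, labels=target_ids).logits
    # Teacher-forced pass conditioned on the perturbed targets.
    logits_pert = model(input_ids=input_ids, labels=perturbed_target_ids).logits

    # Standard teacher-forcing objective: cross-entropy vs. the references.
    ce = F.cross_entropy(
        logits_orig.view(-1, logits_orig.size(-1)), target_ids.view(-1)
    )

    # Pairwise regularizer (assumption): keep the two output distributions
    # close, so the model behaves consistently whether it conditions on
    # gold prefixes or on simulated, model-like prefixes.
    reg = F.kl_div(
        F.log_softmax(logits_pert, dim=-1),
        F.softmax(logits_orig, dim=-1),
        reduction="batchmean",
    )
    return ce + alpha * reg
```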