HydraSum: Disentangling Style Features in Text Summarization with Multi-Decoder Models

Tanya Goyal, Nazneen Rajani, Wenhao Liu, Wojciech Kryscinski


Abstract
Summarization systems make numerous “decisions” about summary properties during inference, e.g., the degree of copying, and the specificity and length of outputs. However, these decisions are implicitly encoded within model parameters, and specific styles cannot be enforced. To address this, we introduce HydraSum, a new summarization architecture that extends the single-decoder framework of current models to a mixture-of-experts version with multiple decoders. We show that HydraSum’s multiple decoders automatically learn contrasting summary styles when trained under the standard training objective, without any extra supervision. Through experiments on three summarization datasets (CNN, Newsroom, and XSum), we show that HydraSum provides a simple mechanism to obtain stylistically diverse summaries by sampling from either individual decoders or their mixtures, outperforming baseline models. Finally, we demonstrate that a small modification to the gating strategy during training can enforce an even stricter style partitioning, e.g. high- vs. low-abstractiveness or high- vs. low-specificity, allowing users to sample from a larger area in the generation space and vary summary styles along multiple dimensions.
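The core mechanism the abstract describes, multiple decoders combined through a gate, can be illustrated with a minimal toy sketch. This is not the authors' implementation (the paper builds on a full encoder-decoder model); it only shows, under simplified assumptions, how per-decoder next-token distributions are interpolated by gate weights, so that a degenerate gate samples from one "expert" style while intermediate gates blend styles:

```python
# Hypothetical sketch of a HydraSum-style mixture of decoders:
#     p(y_t | x) = sum_k g_k * p_k(y_t | x)
# Each decoder k produces a distribution p_k over the vocabulary;
# gate weights g_k (summing to 1) mix them into one distribution.

def mix_decoder_distributions(decoder_probs, gate):
    """Mix K per-decoder vocabulary distributions with gate weights."""
    assert abs(sum(gate) - 1.0) < 1e-9, "gate weights must sum to 1"
    vocab_size = len(decoder_probs[0])
    mixed = [0.0] * vocab_size
    for g_k, p_k in zip(gate, decoder_probs):
        for i, p in enumerate(p_k):
            mixed[i] += g_k * p
    return mixed

# Two toy "decoders" with contrasting styles over a 3-token vocabulary
# (names and numbers are illustrative, not from the paper).
p_extractive = [0.7, 0.2, 0.1]   # favors copied tokens
p_abstractive = [0.1, 0.2, 0.7]  # favors novel tokens

# Gate [1, 0] recovers a single decoder's style; [0.5, 0.5] interpolates.
print(mix_decoder_distributions([p_extractive, p_abstractive], [1.0, 0.0]))
print(mix_decoder_distributions([p_extractive, p_abstractive], [0.5, 0.5]))
```

Varying the gate at inference time is what lets a user move continuously between the styles the individual decoders have specialized in.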
Anthology ID: 2022.emnlp-main.30
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 464–479
URL: https://aclanthology.org/2022.emnlp-main.30
DOI: 10.18653/v1/2022.emnlp-main.30
Cite (ACL): Tanya Goyal, Nazneen Rajani, Wenhao Liu, and Wojciech Kryscinski. 2022. HydraSum: Disentangling Style Features in Text Summarization with Multi-Decoder Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 464–479, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): HydraSum: Disentangling Style Features in Text Summarization with Multi-Decoder Models (Goyal et al., EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.30.pdf