EDU-level Extractive Summarization with Varying Summary Lengths

Yuping Wu, Ching-Hsun Tseng, Jiayu Shang, Shengzhong Mao, Goran Nenadic, Xiao-Jun Zeng


Abstract
Extractive models usually formulate text summarization as extracting the fixed top-k salient sentences from a document as its summary. Few works have explored extracting the finer-grained Elementary Discourse Unit (EDU), and those that have offer little analysis or justification for this choice of extractive unit. Moreover, selecting a fixed top-k of salient sentences fits the summarization need poorly: the number of salient sentences varies across documents, so no common or best k exists in practice. To fill these gaps, this paper first conducts a comparative analysis of oracle summaries based on EDUs and on sentences, providing both theoretical and experimental evidence to justify and quantify that EDUs yield summaries with higher automatic evaluation scores than sentences. Building on this merit of EDUs, the paper then proposes an EDU-level extractive model with Varying summary Lengths (EDU-VL) and develops the corresponding learning algorithm. In an end-to-end training manner, EDU-VL learns to encode the document's EDUs and predict their probabilities, generate multiple candidate summaries of varying lengths based on different k values, and encode and score those candidate summaries. Finally, EDU-VL is evaluated on single- and multi-document benchmark datasets and shows improved ROUGE scores in comparison with state-of-the-art extractive models; a further human evaluation suggests that EDU-based summaries maintain good grammaticality and readability.
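The varying-k candidate-generation idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the per-EDU scores stand in for the model's predicted probabilities, and the candidate scorer here (mean EDU score) is a placeholder for EDU-VL's learned candidate-summary encoder and scorer.

```python
# Hypothetical sketch of varying-length candidate generation:
# score EDUs, build one candidate summary per value of k, then
# pick the highest-scoring candidate instead of a single fixed k.

def top_k_indices(scores, k):
    """Indices of the k highest-scoring EDUs, restored to document order."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

def candidate_summaries(edus, scores, k_values):
    """One candidate summary (a list of EDUs) per admissible value of k."""
    return {k: [edus[i] for i in top_k_indices(scores, k)]
            for k in k_values if k <= len(edus)}

def best_summary(edus, scores, k_values):
    """Select the candidate with the highest score.

    Placeholder scorer: mean predicted EDU score. EDU-VL instead
    encodes each candidate and scores it against the document.
    """
    candidates = candidate_summaries(edus, scores, k_values)
    def mean_score(k):
        idx = top_k_indices(scores, k)
        return sum(scores[i] for i in idx) / len(idx)
    best_k = max(candidates, key=mean_score)
    return candidates[best_k]
```

Because every k yields a candidate and the candidates compete on a learned score, the summary length adapts to the document rather than being fixed in advance.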
Anthology ID:
2023.findings-eacl.123
Volume:
Findings of the Association for Computational Linguistics: EACL 2023
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1655–1667
URL:
https://aclanthology.org/2023.findings-eacl.123
DOI:
10.18653/v1/2023.findings-eacl.123
Cite (ACL):
Yuping Wu, Ching-Hsun Tseng, Jiayu Shang, Shengzhong Mao, Goran Nenadic, and Xiao-Jun Zeng. 2023. EDU-level Extractive Summarization with Varying Summary Lengths. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1655–1667, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
EDU-level Extractive Summarization with Varying Summary Lengths (Wu et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-eacl.123.pdf
Software:
2023.findings-eacl.123.software.zip
Video:
https://aclanthology.org/2023.findings-eacl.123.mp4