GreedyCAS: Unsupervised Scientific Abstract Segmentation with Normalized Mutual Information

Yingqiang Gao, Jessica Lam, Nianlong Gu, Richard Hahnloser


Abstract
The abstracts of scientific papers typically contain both premises (e.g., background and observations) and conclusions. Although conclusion sentences are highlighted in structured abstracts, in non-structured abstracts the concluding information is not explicitly marked, which makes the automatic segmentation of conclusions from scientific abstracts a challenging task. In this work, we explore Normalized Mutual Information (NMI) as a means for abstract segmentation. We consider each abstract as a recurrent cycle of sentences and place two segmentation boundaries by greedily optimizing the NMI score between the two segments, assuming that conclusions are strongly semantically linked with preceding premises. On non-structured abstracts, our proposed unsupervised approach, GreedyCAS, achieves the best performance across all evaluation metrics; on structured abstracts, GreedyCAS outperforms all baseline methods measured by Pk. The strong correlation of NMI with our evaluation metrics reveals the effectiveness of NMI for abstract segmentation.
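To make the boundary-search idea described in the abstract concrete, below is a minimal Python sketch. It is an illustration, not the authors' exact GreedyCAS procedure: the names two_boundary_nmi_split and nmi_score are hypothetical, the NMI scorer between two sentence segments (which the paper defines) is left as a user-supplied function, and the paper's greedy optimization is simplified here to scoring all candidate boundary pairs, which is cheap for the short sentence sequences found in abstracts.

from itertools import combinations

def two_boundary_nmi_split(sentences, nmi_score):
    """Place two cut points (i, j) on the cyclic sequence of sentences
    and return the two-segment split with the highest NMI score.
    `nmi_score(segment_a, segment_b)` is an assumed, user-supplied
    function returning the NMI between the two segments."""
    n = len(sentences)
    best_score, best_split = float("-inf"), None
    for i, j in combinations(range(n), 2):
        # Boundaries i < j cut the cycle into segment A = sentences[i:j]
        # and segment B = the wrap-around remainder sentences[j:] + sentences[:i].
        seg_a = sentences[i:j]
        seg_b = sentences[j:] + sentences[:i]
        score = nmi_score(seg_a, seg_b)
        if score > best_score:
            best_score, best_split = score, (seg_a, seg_b)
    return best_split, best_score

Given a list of an abstract's sentences and an NMI scorer (for instance, one based on word co-occurrence statistics of the two segments), the returned split separates the conclusion-like segment from the preceding premises; the design of the scorer itself is the paper's contribution and is deliberately left abstract here.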
Anthology ID:
2023.emnlp-main.372
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6093–6108
URL:
https://aclanthology.org/2023.emnlp-main.372
DOI:
10.18653/v1/2023.emnlp-main.372
Cite (ACL):
Yingqiang Gao, Jessica Lam, Nianlong Gu, and Richard Hahnloser. 2023. GreedyCAS: Unsupervised Scientific Abstract Segmentation with Normalized Mutual Information. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6093–6108, Singapore. Association for Computational Linguistics.
Cite (Informal):
GreedyCAS: Unsupervised Scientific Abstract Segmentation with Normalized Mutual Information (Gao et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.372.pdf
Video:
https://aclanthology.org/2023.emnlp-main.372.mp4