Learning to Rank Utterances for Query-Focused Meeting Summarization

Xingxian Liu, Yajing Xu


Abstract
Query-focused meeting summarization (QFMS) aims to generate a summary specific to a given query from the meeting transcripts. Due to the conflict between long meetings and the limited input size of models, previous works mainly adopt extract-then-summarize methods, which train extractors to fit binary labels or ROUGE scores, extract utterances related to the query, and then generate a summary. However, these approaches fail to fully exploit the comparison between utterances: for the extractor, the relative order of utterances matters more than their specific scores. In this paper, we propose a Ranker-Generator framework. It learns to rank the utterances by comparing them in pairs and learning from the global orders, then uses the top-ranked utterances as the generator’s input. We show that learning to rank utterances helps select query-relevant utterances effectively, and that the summarizer benefits from it. Experimental results on QMSum show that the proposed model outperforms all existing multi-stage models with fewer parameters.
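The core idea of the abstract, scoring utterances so that pairwise orderings are respected and then feeding the top-ranked ones to a generator, can be sketched as follows. This is a minimal illustration of a pairwise hinge ranking objective, not the authors' exact loss; the function names, the margin value, and the top-k selection are all assumptions for illustration.

```python
def pairwise_ranking_loss(scores, gold_order, margin=1.0):
    """Hinge loss encouraging scores to respect a gold ranking.

    scores: model-assigned relevance scores, one per utterance.
    gold_order: utterance indices sorted from most to least relevant.
    """
    loss = 0.0
    # For every pair (i, j) where i is ranked above j in the gold order,
    # penalize the model unless score[i] exceeds score[j] by the margin.
    for a in range(len(gold_order)):
        for b in range(a + 1, len(gold_order)):
            hi, lo = gold_order[a], gold_order[b]
            loss += max(0.0, margin - (scores[hi] - scores[lo]))
    return loss


def top_k(scores, k):
    """Select indices of the k highest-scoring utterances for the generator."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

Note that the loss depends only on score differences, which matches the abstract's point that comparison orders matter more to the extractor than the specific score values a regression objective would target.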
Anthology ID:
2023.findings-acl.538
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8496–8505
URL:
https://aclanthology.org/2023.findings-acl.538
DOI:
10.18653/v1/2023.findings-acl.538
Cite (ACL):
Xingxian Liu and Yajing Xu. 2023. Learning to Rank Utterances for Query-Focused Meeting Summarization. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8496–8505, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Learning to Rank Utterances for Query-Focused Meeting Summarization (Liu & Xu, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.538.pdf