Multi-LLM Text Summarization
Jiangnan Fang | Cheng-Tse Liu | Jieun Kim | Yash Bhedaru | Ethan Liu | Nikhil Singh | Nedim Lipka | Puneet Mathur | Nesreen K. Ahmed | Franck Dernoncourt | Ryan Rossi | Hanieh Deilamsalehy
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era, 2025
In this work, we propose a multi-LLM summarization framework and investigate two strategies: centralized and decentralized. At each round of conversation, the framework performs two fundamental steps, generation and evaluation, which differ between the two strategies. In both the centralized and decentralized settings, k different LLMs generate diverse summaries of the text. During evaluation, however, the centralized approach leverages a single LLM to evaluate the summaries and select the best one, whereas the decentralized approach uses all k LLMs. Overall, we find that our multi-LLM summarization approaches significantly outperform baselines that leverage only a single LLM, by up to 3x. These results indicate the effectiveness of multi-LLM approaches for summarization.
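The abstract describes a per-round generation step followed by an evaluation step that differs between the two strategies. As a rough illustration only, the sketch below models each LLM as a prompt-to-text callable and shows a single round: k models generate candidate summaries, then either one judge model (centralized) or a majority vote among all k models (decentralized) selects the winner. The prompts, the index-based voting rule, and all function names here are hypothetical assumptions, not the paper's exact protocol.

```python
from collections import Counter
from typing import Callable, List

# An "LLM" is abstracted as a prompt -> text callable (assumption for this sketch).
LLM = Callable[[str], str]

def generate_summaries(llms: List[LLM], text: str) -> List[str]:
    """Generation step: each of the k LLMs produces a candidate summary."""
    prompt = f"Summarize the following text:\n\n{text}"
    return [llm(prompt) for llm in llms]

def rank_prompt(candidates: List[str]) -> str:
    """Hypothetical evaluation prompt: ask for the index of the best summary."""
    listing = "\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    return ("Below are candidate summaries of the same text. "
            "Reply with only the index of the best one.\n" + listing)

def centralized_evaluate(judge: LLM, candidates: List[str]) -> str:
    """Centralized evaluation: a single judge LLM selects the best candidate."""
    choice = int(judge(rank_prompt(candidates)).strip())
    return candidates[choice]

def decentralized_evaluate(llms: List[LLM], candidates: List[str]) -> str:
    """Decentralized evaluation: all k LLMs vote; the majority choice wins."""
    votes = [int(llm(rank_prompt(candidates)).strip()) for llm in llms]
    winner, _ = Counter(votes).most_common(1)[0]
    return candidates[winner]
```

In this reading, the two strategies share the generation step and differ only in who evaluates; the paper's multi-round conversation would repeat these steps, which the single-round sketch omits.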