Can We Trust the Performance Evaluation of Uncertainty Estimation Methods in Text Summarization?
Jianfeng He | Runing Yang | Linlin Yu | Changbin Li | Ruoxi Jia | Feng Chen | Ming Jin | Chang-Tien Lu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Text summarization, a key natural language generation (NLG) task, is vital in various domains. However, the high cost of inaccurate summaries in risk-critical applications, particularly those involving human-in-the-loop decision-making, raises concerns about the reliability of uncertainty estimation on text summarization (UE-TS) evaluation methods. This concern stems from the dependency of uncertainty model metrics on diverse and potentially conflicting NLG metrics. To address this issue, we introduce a comprehensive UE-TS benchmark incorporating 31 NLG metrics across four dimensions. The benchmark evaluates the uncertainty estimation capabilities of two large language models and one pre-trained language model on three datasets, with human-annotation analysis incorporated where applicable. We also assess the performance of 14 common uncertainty estimation methods within this benchmark. Our findings emphasize the importance of considering multiple uncorrelated NLG metrics and diverse uncertainty estimation methods to ensure reliable and efficient evaluation of UE-TS techniques. Our code and data are available: https://github.com/he159ok/Benchmark-of-Uncertainty-Estimation-Methods-in-Text-Summarization.