2024
STORYSUMM: Evaluating Faithfulness in Story Summarization
Melanie Subbiah | Faisal Ladhak | Akankshya Mishra | Griffin Adams | Lydia Chilton | Kathleen McKeown
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Human evaluation has been the gold standard for checking faithfulness in abstractive summarization. However, with a challenging source domain like narrative, multiple annotators can agree that a summary is faithful while missing details that are obvious errors once pointed out. We therefore introduce a new dataset, StorySumm, comprising LLM summaries of short stories with localized faithfulness labels and error explanations. This benchmark is designed for evaluation methods, testing whether a given method can detect challenging inconsistencies. Using this dataset, we first show that any single human annotation protocol is likely to miss inconsistencies, and we advocate for pursuing a range of methods when establishing ground truth for a summarization dataset. We then test recent automatic metrics and find that none of them achieves more than 70% balanced accuracy on this task, demonstrating that it is a challenging benchmark for future work in faithfulness evaluation.