Vaishakh Keshava
2023
Multi-Dimensional Evaluation of Text Summarization with In-Context Learning
Sameer Jain | Vaishakh Keshava | Swarnashree Mysore Sathyendra | Patrick Fernandes | Pengfei Liu | Graham Neubig | Chunting Zhou
Findings of the Association for Computational Linguistics: ACL 2023
Evaluation of natural language generation (NLG) is complex and multi-dimensional. Generated text can be evaluated for fluency, coherence, factuality, or any other dimensions of interest. Most frameworks that perform such multi-dimensional evaluation require training on large manually or synthetically generated datasets. In this paper, we study the efficacy of large language models as multi-dimensional evaluators using in-context learning, obviating the need for large training datasets. Our experiments show that in-context learning-based evaluators are competitive with learned evaluation frameworks for the task of text summarization, establishing state-of-the-art on dimensions such as relevance and factual consistency. We then analyze the effects of factors such as the selection and number of in-context examples on performance. Finally, we study the efficacy of in-context learning-based evaluators in evaluating zero-shot summaries written by large language models such as GPT-3.
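The core idea of the paper, scoring a summary along one dimension by prompting a large language model with a handful of scored examples, can be illustrated with a minimal sketch. The prompt wording, example format, and the call_llm helper below are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of in-context-learning-based evaluation for one dimension
# (illustrative only; prompt wording, examples, and call_llm are assumptions,
# not the authors' code or prompts).

def build_prompt(dimension, examples, source, summary):
    """Assemble a few-shot prompt: each in-context example pairs a
    (source, summary) with a human score for the target dimension."""
    lines = [f"Evaluate the {dimension} of the summary on a scale of 1 to 5.\n"]
    for ex in examples:
        lines.append(f"Source: {ex['source']}")
        lines.append(f"Summary: {ex['summary']}")
        lines.append(f"{dimension.capitalize()} score: {ex['score']}\n")
    # Append the test instance and leave the score for the model to fill in.
    lines.append(f"Source: {source}")
    lines.append(f"Summary: {summary}")
    lines.append(f"{dimension.capitalize()} score:")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    """Hypothetical helper; replace with a call to any large language model API."""
    raise NotImplementedError

if __name__ == "__main__":
    examples = [
        {"source": "The city council approved the new budget on Monday ...",
         "summary": "The council passed the budget.",
         "score": 5},
    ]
    prompt = build_prompt(
        "consistency", examples,
        source="Researchers released a new benchmark dataset ...",
        summary="A new benchmark dataset was released by researchers.",
    )
    print(prompt)               # inspect the assembled few-shot prompt
    # score = call_llm(prompt)  # the model's completion is read as the score
```

Changing the `dimension` string and the in-context examples yields evaluators for other dimensions (e.g., fluency, coherence, relevance) without any training.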