Yi Chun Lo


2025

Does Anaphora Resolution Improve LLM Fine-Tuning for Summarisation?
Yi Chun Lo | Ruslan Mitkov
Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models

This study investigates whether adding anaphora resolution as a preprocessing step before fine-tuning an LLM for text summarisation can improve the quality of the generated summaries. Two fine-tuning runs were conducted with the T5-base and BART-large models on the SAMSum dataset: one using the original text and the other using text preprocessed by a simplified version of MARS (Mitkov’s Anaphora Resolution System). The experiments show that when the T5-base model is fine-tuned on anaphora-resolved inputs, its ROUGE scores improve. In contrast, the BART-large model shows only a slight improvement under the same conditions, which is not statistically significant. Further analysis of the generated summaries indicates that anaphora resolution helps with semantic alignment.
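The preprocessing idea can be illustrated with a toy sketch. This is not the authors' MARS implementation: a real resolver derives the pronoun-to-antecedent mapping from linguistic rules, whereas here the mapping is supplied by hand purely to show what "anaphora-resolved input" looks like before it reaches fine-tuning.

```python
import re

def resolve_anaphora(text, antecedents):
    """Replace each pronoun in `text` with its resolved antecedent.

    `antecedents` maps a lower-cased pronoun to the noun phrase it
    refers to. A real system such as MARS would compute this mapping;
    it is hard-coded here for illustration only.
    """
    def substitute(match):
        pronoun = match.group(0)
        return antecedents.get(pronoun.lower(), pronoun)

    # Match any of the mapped pronouns as whole words, case-insensitively.
    pattern = r"\b(" + "|".join(map(re.escape, antecedents)) + r")\b"
    return re.sub(pattern, substitute, text, flags=re.IGNORECASE)

dialogue = "Amanda baked cookies. She will bring them to Jerry."
mapping = {"she": "Amanda", "them": "the cookies"}
resolved = resolve_anaphora(dialogue, mapping)
# resolved == "Amanda baked cookies. Amanda will bring the cookies to Jerry."
```

The resolved text, with explicit entity mentions in place of pronouns, would then be fed to the summarisation model during fine-tuning in place of the original dialogue turns.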