Junsoo Park
2024
OffsetBias: Leveraging Debiased Data for Tuning Evaluators
Junsoo Park | Seungyeon Jwa | Ren Meiying | Daeyoung Kim | Sanghyuk Choi
Findings of the Association for Computational Linguistics: EMNLP 2024
Employing Large Language Models (LLMs) to assess the quality of generated responses has become a widely adopted evaluation method. Specifically, instruction-tuned models and fine-tuned judge models based on open-source LLMs have been reported. While it is known that judge models are vulnerable to certain biases, such as favoring longer answers regardless of content, the specifics of these biases remain under-explored. In this work, we qualitatively identify six types of biases inherent in various judge models. We propose EvalBiasBench as a meta-evaluation collection of hand-crafted test cases for each bias type. Additionally, we present de-biasing dataset construction methods and the associated preference dataset OffsetBias. Experimental results demonstrate that fine-tuning on our dataset significantly enhances the robustness of judge models against biases and improves performance across most evaluation scenarios. We release our datasets and the fine-tuned judge model to the public.
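For readers unfamiliar with the setup the abstract refers to, below is a minimal sketch of pairwise LLM-as-judge evaluation. The model id and prompt template are illustrative assumptions, not the paper's judge model or its actual prompt.

```python
# Minimal sketch of pairwise LLM-as-judge evaluation (illustrative only).
# The model id and prompt below are assumptions, not the paper's judge setup.
from transformers import pipeline

JUDGE_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed open-source backbone
judge = pipeline("text-generation", model=JUDGE_MODEL)

def pick_better(instruction: str, answer_a: str, answer_b: str) -> str:
    """Ask the judge which of two answers better follows the instruction."""
    prompt = (
        "You are an impartial judge. Given an instruction and two answers, "
        "reply with only 'A' or 'B' to indicate the better answer.\n\n"
        f"Instruction: {instruction}\n\n"
        f"Answer A: {answer_a}\n\n"
        f"Answer B: {answer_b}\n\n"
        "Verdict:"
    )
    output = judge(prompt, max_new_tokens=4)[0]["generated_text"]
    return output[len(prompt):].strip()  # e.g. "A" or "B"
```

A length-biased judge, for instance, tends to answer in favor of whichever response is longer regardless of content, which is the kind of failure EvalBiasBench is designed to probe.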
2022
HaRiM+: Evaluating Summary Quality with Hallucination Risk
Seonil (Simon) Son | Junsoo Park | Jeong-in Hwang | Junghwa Lee | Hyungjong Noh | Yeonsoo Lee
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
One of the challenges of developing a summarization model arises from the difficulty of measuring the factual inconsistency of the generated text. In this study, we reinterpret the decoder overconfidence-regularizing objective suggested by Miao et al. (2021) as a hallucination risk measurement to better estimate the quality of generated summaries. We propose a reference-free metric, HaRiM+, which only requires an off-the-shelf summarization model to compute the hallucination risk based on token likelihoods. Deploying it requires no additional training of models or ad-hoc modules, which usually need alignment to human judgments. For summary-quality estimation, HaRiM+ records state-of-the-art correlation to human judgment on three summary-quality annotation sets: FRANK, QAGS, and SummEval. We hope that our work, which merits the use of summarization models, facilitates the progress of both automated evaluation and generation of summaries.
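As a rough illustration of the likelihood-based idea (not the published HaRiM+ formula), the sketch below scores a summary by its mean token log-probability under an off-the-shelf summarization model. The model id and the use of the mean log-likelihood as the risk signal are assumptions made purely for illustration.

```python
# Illustrative sketch: score a summary by its token log-likelihoods under an
# off-the-shelf summarization model. This is NOT the published HaRiM+ formula;
# it only shows the reference-free, likelihood-based idea in the abstract.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "facebook/bart-large-cnn"  # assumed off-the-shelf summarizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID).eval()

def summary_log_likelihood(source: str, summary: str) -> float:
    """Mean token log-probability of `summary` given `source` under the summarizer."""
    enc = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(text_target=summary, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    # out.loss is the mean cross-entropy over summary tokens, so its
    # negation is the mean token log-likelihood.
    return -out.loss.item()

article = "The company reported a quarterly profit of $3 million, up 10% from last year."
faithful = "The company posted a $3 million quarterly profit."
hallucinated = "The company reported a quarterly loss of $30 billion."
print(summary_log_likelihood(article, faithful))      # higher (less surprising)
print(summary_log_likelihood(article, hallucinated))  # typically lower (more surprising)
```

In this crude version, a lower score means the summarizer finds the summary surprising given the source, which can be read as a higher hallucination risk; the published metric refines this idea, and the paper gives the exact formulation.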