LLMCrit: Teaching Large Language Models to Use Criteria

Weizhe Yuan, Pengfei Liu, Matthias Gallé


Abstract
Humans follow criteria when they execute tasks, and these criteria are directly used to assess the quality of task completion. Therefore, having models learn to use criteria to provide feedback can help humans or models perform tasks better. However, current research in this area tends to consider only a limited set of criteria or quality assessment aspects. To fill this gap, we propose a general framework that enables large language models (LLMs) to use comprehensive criteria for a task when delivering natural language feedback on task execution. In particular, we present a model-in-the-loop framework that semi-automatically derives criteria from collected guidelines for different writing tasks and constructs in-context demonstrations for each criterion. We operationalize this idea on three real-world tasks: paper introduction writing, Python code writing, and Reddit post writing, and evaluate our feedback generation framework using different LLMs. The results reveal the fine-grained effects of adding criteria and demonstrations and provide valuable guidance on how to teach LLMs to use criteria more effectively.
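
To make the framework description concrete, here is a minimal sketch, not the authors' implementation: it assumes hypothetical names (Criterion, Demo, build_feedback_prompt) and shows one plausible way criteria derived from guidelines, each paired with in-context demonstrations, could be assembled into a feedback-generation prompt for an LLM.

# Minimal illustrative sketch (not the paper's code); all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Demo:
    """An in-context demonstration: an example text plus feedback for one criterion."""
    text: str
    feedback: str

@dataclass
class Criterion:
    """A single quality criterion derived from task guidelines."""
    name: str
    description: str
    demos: list[Demo] = field(default_factory=list)

def build_feedback_prompt(task: str, submission: str, criteria: list[Criterion]) -> str:
    """Compose a prompt that asks an LLM for natural language feedback,
    grounded in explicit criteria and per-criterion demonstrations."""
    lines = [f"Task: {task}",
             "Assess the submission against each criterion below."]
    for c in criteria:
        lines.append(f"\nCriterion: {c.name} -- {c.description}")
        for d in c.demos:
            lines.append(f"Example text: {d.text}")
            lines.append(f"Example feedback: {d.feedback}")
    lines.append(f"\nSubmission:\n{submission}")
    lines.append("Feedback:")
    return "\n".join(lines)

# Example usage:
# prompt = build_feedback_prompt("Reddit post writing", post_text, criteria)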
Anthology ID:
2024.findings-acl.472
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7929–7960
URL:
https://aclanthology.org/2024.findings-acl.472
DOI:
10.18653/v1/2024.findings-acl.472
Cite (ACL):
Weizhe Yuan, Pengfei Liu, and Matthias Gallé. 2024. LLMCrit: Teaching Large Language Models to Use Criteria. In Findings of the Association for Computational Linguistics: ACL 2024, pages 7929–7960, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
LLMCrit: Teaching Large Language Models to Use Criteria (Yuan et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.472.pdf