Can LLMs Clarify? Investigation and Enhancement of Large Language Models on Argument Claim Optimization

Yiran Wang, Ben He, Xuanang Chen, Le Sun


Abstract
In argumentation, the claim is the foundational proposition that underpins the argument, serving as the central pillar upon which the argument is constructed. It guides the subsequent presentation of evidence, reasoning, and analysis, thereby facilitating the audience's understanding of the core issue. Ensuring that the claim is precise and unambiguous is therefore crucial for constructing a coherent and persuasive argument. While Large Language Models (LLMs) have demonstrated proficiency in text rewriting tasks such as style transfer and query rewriting, their application to claim optimization remains unexplored. Unlike other rewriting tasks, claim clarification requires the model to rewrite ambiguous or unclear segments of the claim, enrich the content by adding omitted key details, and eliminate redundant or verbose elements. Addressing this gap, this paper evaluates the performance of LLMs on the claim clarification task across various settings. Since popular rewriting evaluation metrics such as BLEU and ROUGE rely on exact word matching, this paper introduces a novel semantic evaluation approach based on a sliding window mechanism. Three distinct LLMs, namely Llama2, Mistral, and Qwen2, are assessed for their ability to clarify arguments through zero-shot and few-shot prompting, as well as supervised fine-tuning (SFT). Additionally, we propose a reinforcement learning-based clarification approach that optimally balances content preservation with claim clarity, thereby augmenting the performance of LLMs on the claim clarification task.
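The abstract's sliding-window evaluation can be illustrated with a minimal sketch: slide fixed-size token windows over the candidate rewrite, score each against every reference window, and average the best matches. This is a hypothetical reconstruction, not the paper's implementation; a toy term-frequency cosine stands in here for the semantic (embedding-based) similarity the paper uses, and the function name, window size, and stride are illustrative assumptions.

```python
from collections import Counter
import math


def _cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def sliding_window_score(candidate: str, reference: str,
                         window: int = 5, stride: int = 1) -> float:
    """Average, over candidate windows, of the best-matching reference window.

    A real semantic variant would replace the TF cosine with a
    sentence-embedding similarity; the windowing logic is the same.
    """
    cand = candidate.lower().split()
    ref = reference.lower().split()
    # If a text is shorter than the window, use it as a single window.
    cand_windows = [cand[i:i + window]
                    for i in range(0, max(len(cand) - window + 1, 1), stride)]
    ref_vecs = [Counter(ref[i:i + window])
                for i in range(0, max(len(ref) - window + 1, 1), stride)]
    scores = [max(_cosine(Counter(cw), rv) for rv in ref_vecs)
              for cw in cand_windows]
    return sum(scores) / len(scores)
```

Unlike whole-sequence BLEU/ROUGE, this rewards a rewrite whose content appears anywhere in the reference, even if reordered, since each candidate window is matched against its best-aligned reference window.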
Anthology ID:
2025.coling-main.273
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
4066–4077
URL:
https://aclanthology.org/2025.coling-main.273/
Cite (ACL):
Yiran Wang, Ben He, Xuanang Chen, and Le Sun. 2025. Can LLMs Clarify? Investigation and Enhancement of Large Language Models on Argument Claim Optimization. In Proceedings of the 31st International Conference on Computational Linguistics, pages 4066–4077, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Can LLMs Clarify? Investigation and Enhancement of Large Language Models on Argument Claim Optimization (Wang et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.273.pdf