Yilin Cao
2025
Perspective-driven Preference Optimization with Entropy Maximization for Diverse Argument Generation
Yilin Cao | Ruike Zhang | Penghui Wei | Qingchao Kong | Wenji Mao
Findings of the Association for Computational Linguistics: EMNLP 2025
In subjective natural language generation tasks, generating diverse perspectives is essential for fostering balanced discourse and mitigating bias. Argument generation with diverse perspectives plays a vital role in advancing the understanding of controversial claims. Despite the strong generative capabilities of large language models (LLMs), the diversity of perspectives remains insufficiently explored within the argument generation task. Moreover, there remains a significant research gap in methods that explicitly generate multi-perspective arguments while controlling quality through claim-stance alignment constraints. In this paper, we propose POEM, a Perspective-aware Preference Optimization with Entropy Maximization framework for diverse argument generation. It enhances perspective diversity through preference optimization on a preference dataset constructed via perspective mining and diversity measurement. It further introduces entropy maximization to promote perspective diversity by encouraging dispersed semantic representations among the generated arguments. Experimental results on claim-stance argument generation benchmarks, together with human evaluation, show that POEM generates diverse arguments while maintaining claim and stance controllability and text quality comparable to state-of-the-art baselines.
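The abstract describes the entropy-maximization component only at a high level. Below is a minimal illustrative sketch, not the authors' actual POEM objective, of how a dispersion-promoting entropy term over argument embeddings could look; PyTorch, cosine similarity, and the softmax-based entropy surrogate are all assumptions made purely for illustration.

```python
# Illustrative sketch only (assumed formulation, NOT the authors' POEM objective):
# encourage dispersed semantic representations among arguments generated for the
# same claim/stance by maximizing the entropy of each argument's similarity
# distribution over the other arguments.
import torch
import torch.nn.functional as F

def diversity_entropy_loss(arg_embeddings: torch.Tensor) -> torch.Tensor:
    # arg_embeddings: (n_args, dim) sentence embeddings of generated arguments.
    z = F.normalize(arg_embeddings, dim=-1)                   # unit-norm embeddings
    sim = z @ z.t()                                           # pairwise cosine similarities
    n = sim.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=sim.device)
    probs = F.softmax(sim[off_diag].view(n, n - 1), dim=-1)   # per-argument distribution over the others
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    return -entropy                                           # maximizing entropy = minimizing this loss

# Toy usage: 4 arguments with 8-dimensional embeddings.
emb = torch.randn(4, 8, requires_grad=True)
loss = diversity_entropy_loss(emb)
loss.backward()
```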
2024
TARA: Token-level Attribute Relation Adaptation for Multi-Attribute Controllable Text Generation
Yilin Cao | Jiahao Zhao | Ruike Zhang | Hanyi Zou | Wenji Mao
Findings of the Association for Computational Linguistics: EMNLP 2024
Co-authors
- Wenji Mao 2
- Ruike Zhang 2
- Qingchao Kong 1
- Penghui Wei 1
- Jiahao Zhao 1