Xuan Qi
2026
DebateQA: Evaluating Question Answering on Debatable Knowledge
Rongwu Xu | Xuan Qi | Zehan Qi | Wei Xu | Zhijiang Guo
Findings of the Association for Computational Linguistics: EACL 2026
The rise of large language models (LLMs) has enabled us to seek answers to inherently debatable questions from LLM chatbots, necessitating a reliable way to evaluate this ability. However, traditional QA benchmarks that assume fixed answers are inadequate for this purpose. To address this, we introduce DebateQA, a dataset of 2,941 debatable questions, each accompanied by multiple human-annotated partial answers that capture a variety of perspectives. We develop two metrics: Perspective Diversity, which evaluates the comprehensiveness of perspectives, and Dispute Awareness, which assesses whether the LLM acknowledges the question’s debatable nature. Experiments demonstrate that both metrics align with human preferences and are stable across different underlying models. Using DebateQA and these two metrics, we assess 12 prevalent LLMs and retrieval-augmented generation methods. Our findings reveal that while LLMs generally excel at recognizing debatable issues, their ability to provide comprehensive answers encompassing diverse perspectives varies considerably.
2025
A Systematic Survey of Automatic Prompt Optimization Techniques
Kiran Ramnath | Kang Zhou | Sheng Guan | Soumya Smruti Mishra | Xuan Qi | Zhengyuan Shen | Shuai Wang | Sangmin Woo | Sullam Jeoung | Yawei Wang | Haozhu Wang | Han Ding | Yuzhe Lu | Zhichao Xu | Yun Zhou | Balasubramaniam Srinivasan | Qiaojing Yan | Yueyan Chen | Haibo Ding | Panpan Xu | Lin Lee Cheong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Since the advent of large language models (LLMs), prompt engineering has been a crucial step for eliciting desired responses on various Natural Language Processing (NLP) tasks. However, prompt engineering remains an impediment for end users due to rapid advances in models, tasks, and associated best practices. To mitigate this, Automatic Prompt Optimization (APO) techniques have recently emerged that automatically refine prompts to improve the performance of LLMs on a range of tasks. In this paper, we present a comprehensive survey summarizing the current progress and remaining challenges in this field. We provide a formal definition of APO and a 5-part unifying framework, and then rigorously categorize all relevant works based on their salient features within that framework. We hope to spur further research guided by our framework.
Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?
Xuan Qi | Jiahao Qiu | Xinzhe Juan | Yue Wu | Mengdi Wang
Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²)
Aligning large language models (LLMs) with human preferences remains a key challenge in AI. Preference-based optimization methods, such as Reinforcement Learning with Human Feedback (RLHF) and Direct Preference Optimization (DPO), rely on human-annotated datasets to improve alignment. In this work, we identify a crucial property of existing learning methods: the distinguishing signal in preferred responses is often concentrated in the early tokens. We refer to this as shallow preference signals.

To explore this property, we systematically truncate preference datasets at various points and train both reward models and DPO models on the truncated data. Surprisingly, models trained on truncated datasets, retaining only the first half or fewer tokens, achieve comparable or even superior performance to those trained on full datasets. For example, a reward model trained on a 40%-truncated version of the Skywork-Reward-Preference-80K-v0.2 dataset outperforms one trained on the full dataset. This pattern is consistent across multiple datasets, suggesting the widespread presence of shallow preference signals.

We further investigate the distribution of the reward signal through decoding strategies. Motivated by the shallow reward signal observation, we consider two simple decoding strategies, Length Control Decoding and KL Threshold Control Decoding, which leverage shallow preference signals to optimize the trade-off between alignment and computational efficiency. Performance improves further, again validating our hypothesis.

The phenomenon of shallow preference signals highlights potential issues in LLM alignment: existing alignment methods often focus on aligning only the initial tokens of responses rather than the full response. This could lead to discrepancies with real-world human preferences, resulting in suboptimal alignment performance.
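The truncation setup described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`truncate_response`, `truncate_preference_pair`) are hypothetical, and whitespace splitting stands in for a real tokenizer. The idea is simply to keep the leading fraction of each response in a (prompt, chosen, rejected) preference pair before training a reward model or DPO model.

```python
def truncate_response(text: str, keep_fraction: float) -> str:
    """Keep only the leading fraction of a response's tokens.

    Whitespace tokenization is a stand-in for a real tokenizer.
    """
    tokens = text.split()
    cutoff = max(1, int(len(tokens) * keep_fraction))
    return " ".join(tokens[:cutoff])


def truncate_preference_pair(pair: dict, keep_fraction: float) -> dict:
    """Truncate both responses of a preference pair; the prompt is untouched."""
    return {
        "prompt": pair["prompt"],
        "chosen": truncate_response(pair["chosen"], keep_fraction),
        "rejected": truncate_response(pair["rejected"], keep_fraction),
    }


# Toy preference pair; a 40% truncation mirrors the setting mentioned above.
pair = {
    "prompt": "Explain photosynthesis.",
    "chosen": "Plants convert light energy into chemical energy stored in glucose.",
    "rejected": "Photosynthesis is when animals breathe oxygen at night only.",
}
truncated = truncate_preference_pair(pair, 0.4)
```

A truncated dataset built this way would then be fed to a standard reward-model or DPO training loop in place of the full-length pairs; only the data changes, not the objective.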
Co-authors
- Yueyan Chen 1
- Lin Lee Cheong 1
- Han Ding 1
- Haibo Ding 1
- Sheng Guan 1
- Zhijiang Guo 1
- Sullam Jeoung 1
- Xinzhe Juan 1
- Yuzhe Lu 1
- Soumya Smruti Mishra 1
- Zehan Qi 1
- Jiahao Qiu 1
- Kiran Ramnath 1
- Zhengyuan Shen 1
- Balasubramaniam Srinivasan 1
- Shuai Wang 1
- Yawei Wang 1
- Haozhu Wang 1
- Mengdi Wang 1
- Sangmin Woo 1
- Yue Wu 1
- Zhichao Xu 1
- Panpan Xu 1
- Rongwu Xu 1
- Wei Xu 1
- Qiaojing Yan 1
- Kang Zhou 1
- Yun Zhou 1