Xiangnan Ma
2024
RankPrompt: Step-by-Step Comparisons Make Language Models Better Reasoners
Chi Hu | Yuan Ge | Xiangnan Ma | Hang Cao | Qiang Li | Yonghua Yang | Tong Xiao | Jingbo Zhu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Large Language Models (LLMs) have achieved impressive performance across various reasoning tasks. However, even state-of-the-art LLMs such as ChatGPT are prone to logical errors during their reasoning processes. Existing solutions, such as deploying task-specific verifiers or voting over multiple reasoning paths, either require extensive human annotations or fail in scenarios with inconsistent responses. To address these challenges, we introduce RankPrompt, a new prompting method that enables LLMs to self-rank their responses without additional resources. RankPrompt breaks down the ranking problem into a series of comparisons among diverse responses, leveraging the inherent capabilities of LLMs to generate chains of comparison as contextual exemplars. Our experiments across 11 arithmetic and commonsense reasoning tasks show that RankPrompt significantly enhances the reasoning performance of ChatGPT and GPT-4, with improvements of up to 13%. Moreover, RankPrompt excels in LLM-based automatic evaluations for open-ended tasks, aligning with human judgments 74% of the time on the AlpacaEval dataset. It also exhibits robustness to variations in response order and consistency. Collectively, our results validate RankPrompt as an effective method for eliciting high-quality feedback from language models.
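As a rough illustration of the response-comparison idea the abstract describes, the sketch below samples several reasoning paths and then prompts the model to compare them step by step and pick a winner. The `llm` callable, the prompt wording, and the verdict parsing are hypothetical placeholders, not the authors' released implementation.

```python
# Minimal sketch of the RankPrompt idea: sample diverse reasoning paths,
# then ask the model itself to compare them step by step and select the
# best one, instead of training a verifier or taking a majority vote.
# `llm` is an assumed text-completion callable, not a real API.
from typing import Callable, List

def rank_prompt_answer(question: str,
                       llm: Callable[[str], str],
                       n_candidates: int = 4) -> str:
    # 1) Generate diverse candidate reasoning paths (e.g. via sampling).
    candidates: List[str] = [
        llm(f"Q: {question}\nLet's think step by step.")
        for _ in range(n_candidates)
    ]

    # 2) Ask the model to compare the candidates step by step and
    #    output the index of the best one.
    numbered = "\n\n".join(
        f"Response {i + 1}:\n{c}" for i, c in enumerate(candidates)
    )
    compare_prompt = (
        f"Question: {question}\n\n{numbered}\n\n"
        "Compare the responses above step by step, noting where their "
        "reasoning diverges, then output only the number of the best response."
    )
    verdict = llm(compare_prompt)

    # 3) Return the candidate the model judged best; fall back to the
    #    first candidate if the verdict cannot be parsed.
    for i in range(n_candidates):
        if str(i + 1) in verdict:
            return candidates[i]
    return candidates[0]
```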
2021
RankNAS: Efficient Neural Architecture Search by Pairwise Ranking
Chi Hu | Chenglong Wang | Xiangnan Ma | Xia Meng | Yinqiao Li | Tong Xiao | Jingbo Zhu | Changliang Li
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
This paper addresses the efficiency challenge of Neural Architecture Search (NAS) by formulating the task as a ranking problem. Previous methods require numerous training examples to accurately estimate the performance of architectures, even though the actual goal is only to distinguish “good” candidates from “bad” ones. Here we do not resort to performance predictors. Instead, we propose a performance ranking method (RankNAS) via pairwise ranking. It enables efficient architecture search using far fewer training examples. Moreover, we develop an architecture selection method to prune the search space and concentrate on more promising candidates. Extensive experiments on machine translation and language modeling tasks show that RankNAS can design high-performance architectures while being orders of magnitude faster than state-of-the-art NAS systems.
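A rough illustration of the pairwise-ranking idea: rather than regressing each architecture's exact accuracy, learn which of two architectures is better from the difference of their feature vectors. The feature encoding and the scikit-learn model below are illustrative assumptions, not the paper's actual system.

```python
# Minimal sketch of pairwise performance ranking in the spirit of RankNAS:
# train a classifier on feature *differences* to predict which of two
# architectures is better, then rank unseen candidates by the induced score.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def train_pairwise_ranker(features: np.ndarray, scores: np.ndarray):
    """features: (n, d) architecture features; scores: (n,) measured quality."""
    diffs, labels = [], []
    for i, j in combinations(range(len(scores)), 2):
        diffs.append(features[i] - features[j])
        labels.append(int(scores[i] > scores[j]))  # 1 if arch i beats arch j
    return LogisticRegression().fit(np.array(diffs), np.array(labels))

def rank_candidates(ranker, features: np.ndarray) -> np.ndarray:
    # With a linear ranker, pairwise comparisons collapse into a single
    # per-architecture score w.x, so candidates can be sorted in one pass
    # without any head-to-head evaluation at search time.
    scores = features @ ranker.coef_.ravel()
    return np.argsort(-scores)  # indices from best to worst
```

The appeal of the pairwise formulation is data efficiency: n measured architectures yield n(n-1)/2 training pairs, so far fewer evaluated examples are needed than for an accurate performance predictor.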
Co-authors
- Chi Hu 2
- Tong Xiao 2
- Jingbo Zhu 2
- Chenglong Wang 1
- Xia Meng 1