Jian Luo


2025

"近年来,大型语言模型如ChatGPT显著提高了机器对自然语言的理解能力,其中,问答推理任务在推动语言理解能力和人机交互智能化方面具有重要意义,但目前仍面临诸多挑战。本文针对现有大模型资源消耗大、小模型推理能力弱,低资源语言推理能力受限等问题,提出了融合思维链和微调技术的方法,通过Human-Thinking提示策略优化大模型推理能力,并借助大模型指令微调提升小模型推理性能,引入多角色协作机制进一步优化推理步骤质量。通过探索跨语言思维链提示方法,利用高资源语言知识弥补低资源语言不足,采用双通道机制和投票打分机制整合不同语言推理知识,提升模型在低资源语言的推理表现。实验结果表明,本文方法能有效提升小型模型在多语言问答推理的能力,具有一定的研究价值。"

2024

Pairwise Ranking Prompting (PRP) demonstrates impressive effectiveness in zero-shot document re-ranking tasks with large language models (LLMs). However, existing PRP methods output the same label for comparison results regardless of their confidence level, ignoring the uncertainty of pairwise comparisons and thus underutilizing the generation-probability information of LLMs. To bridge this gap, we propose PRP-Graph, a novel pairwise re-ranking approach based on a refined scoring PRP unit that exploits the output probabilities of target labels to capture the degree of certainty of the comparison results. Specifically, PRP-Graph consists of two stages: ranking graph construction and ranking graph aggregation. Extensive experiments conducted on the BEIR benchmark demonstrate the superiority of our approach over existing PRP-based methods. Comprehensive analysis reveals that PRP-Graph is highly robust to the initial ranking order and delivers excellent re-ranking results with acceptable efficiency. Our code and data are available at https://github.com/Memelank/PRP-Graph.
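The core idea of a "scored" PRP unit can be sketched as follows: instead of a hard A/B label, take the model's probabilities for the two label tokens as soft evidence, accumulate them as weighted edges between documents, and rank by aggregated score. This is a minimal illustration, not the paper's implementation; `label_probs` stands in for a real LLM call returning P("A") and P("B") for a relevance-comparison prompt, and here simply favors the longer passage.

```python
from itertools import combinations
from collections import defaultdict

def label_probs(doc_a: str, doc_b: str) -> tuple[float, float]:
    """Stub for an LLM pairwise comparison: the longer passage 'wins'
    with confidence proportional to the length gap (illustration only)."""
    la, lb = len(doc_a), len(doc_b)
    pa = la / (la + lb)
    return pa, 1.0 - pa

def prp_graph_rank(docs: dict[str, str]) -> list[str]:
    """Score every document pair with soft (probability-weighted) edges
    and rank documents by their aggregated scores."""
    scores: dict[str, float] = defaultdict(float)
    for (ida, a), (idb, b) in combinations(docs.items(), 2):
        # Average over both prompt orders to reduce positional bias.
        pa1, pb1 = label_probs(a, b)
        pb2, pa2 = label_probs(b, a)
        scores[ida] += (pa1 + pa2) / 2
        scores[idb] += (pb1 + pb2) / 2
    return sorted(docs, key=scores.__getitem__, reverse=True)

docs = {"d1": "short", "d2": "a much longer passage", "d3": "medium text"}
print(prp_graph_rank(docs))  # → ['d2', 'd3', 'd1']
```

Under the stub, the soft scores preserve *how decisively* each comparison was won, which is exactly the probability information that hard-label PRP discards.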