Lingxi Zhang
Also published as: LingXi Zhang
2024
ARL2: Aligning Retrievers with Black-box Large Language Models via Self-guided Adaptive Relevance Labeling
LingXi Zhang | Yue Yu | Kuan Wang | Chao Zhang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Retrieval-augmented generation enhances large language models (LLMs) by incorporating relevant information from external knowledge sources. This enables LLMs to adapt to specific domains and mitigate hallucinations in knowledge-intensive tasks. However, existing retrievers are often misaligned with LLMs due to separate training processes and the inherent black-box nature of LLMs. To address this challenge, we propose ARL2, a retriever learning technique that harnesses LLMs as labelers. ARL2 leverages LLMs to annotate and score adaptive relevance evidence, enabling the retriever to learn from robust LLM supervision. Furthermore, ARL2 incorporates a self-training strategy to minimize the cost of API calls. Extensive experiments demonstrate the effectiveness of ARL2, achieving accuracy improvements of 5.4% on NQ and 4.6% on MMLU over state-of-the-art methods. Additionally, ARL2 exhibits robust transfer learning capabilities and strong zero-shot generalization abilities.
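The abstract's central move is treating the black-box LLM purely as a relevance labeler and training the retriever against those labels. Below is a minimal sketch of that idea in PyTorch; the `call_llm` stub, the scoring prompt, and the KL-style distillation loss are all illustrative assumptions, not the paper's exact labeling or training procedure.

```python
import torch
import torch.nn.functional as F

def call_llm(prompt: str) -> float:
    """Hypothetical black-box LLM call returning a relevance score in [0, 1].

    Stand-in for a real API client; since the LLM is used strictly as a
    labeler, any chat-completion endpoint could be plugged in here.
    """
    raise NotImplementedError("wire up your LLM provider here")

def label_relevance(question: str, passages: list[str]) -> torch.Tensor:
    """Ask the LLM to score how well each passage supports the question."""
    scores = []
    for p in passages:
        prompt = (
            f"Question: {question}\n"
            f"Passage: {p}\n"
            "On a scale from 0 to 1, how useful is this passage for "
            "answering the question? Reply with a single number."
        )
        scores.append(call_llm(prompt))
    return torch.tensor(scores)

def distill_step(retriever_scores: torch.Tensor,
                 llm_scores: torch.Tensor) -> torch.Tensor:
    """KL loss nudging the retriever's ranking toward the LLM's labels."""
    teacher = F.softmax(llm_scores, dim=-1)
    student = F.log_softmax(retriever_scores, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean")
```

The self-training strategy mentioned in the abstract would sit on top of this loop, reusing the retriever's own confident predictions to avoid repeated API calls for labels.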
2023
FC-KBQA: A Fine-to-Coarse Composition Framework for Knowledge Base Question Answering
Lingxi Zhang | Jing Zhang | Yanling Wang | Shulin Cao | Xinmei Huang | Cuiping Li | Hong Chen | Juanzi Li
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The generalization problem in KBQA has drawn considerable attention. Existing research suffers either from generalization issues caused by entangled, coarse-grained modeling of the logical expression, or from inexecutability issues caused by fine-grained modeling of disconnected classes and relations in real KBs. We propose a Fine-to-Coarse Composition framework for KBQA (FC-KBQA) to ensure both the generalization ability and the executability of the logical expression. The main idea of FC-KBQA is to extract relevant fine-grained knowledge components from the KB and reformulate them into middle-grained knowledge pairs for generating the final logical expressions. FC-KBQA achieves new state-of-the-art performance on GrailQA and WebQSP, and runs 4 times faster than the baseline. Our code is available at https://github.com/RUCKBReasoning/FC-KBQA.
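As a rough illustration of the fine-to-coarse idea, the sketch below pairs independently retrieved classes and relations and keeps only combinations that are actually connected in the KB, which is what keeps the final logical expression executable. `ToyKB` and its `connected` interface are hypothetical stand-ins for the paper's real KB access and trained component rankers.

```python
from itertools import product

class ToyKB:
    """Minimal stand-in for a real knowledge base (hypothetical interface)."""
    def __init__(self, class_relation_edges):
        # (class, relation) pairs that co-occur in the KB schema
        self.edges = set(class_relation_edges)

    def connected(self, cls, rel):
        return (cls, rel) in self.edges

def compose_pairs(classes, relations, kb):
    """Reformulate fine-grained components into middle-grained
    (class, relation) pairs, discarding KB-disconnected combinations."""
    return [(c, r) for c, r in product(classes, relations) if kb.connected(c, r)]

# Usage: only KB-connected pairs survive to feed logical-form generation.
kb = ToyKB({("film.film", "film.film.director"),
            ("music.album", "music.album.artist")})
pairs = compose_pairs(
    ["film.film", "music.album"],
    ["film.film.director", "music.album.artist"],
    kb,
)
# -> [('film.film', 'film.film.director'), ('music.album', 'music.album.artist')]
```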
Co-authors
- Jing Zhang 1
- Yanling Wang 1
- Shulin Cao 1
- Xinmei Huang 1
- Cuiping Li 1
- Cuiping Li 1
- Hong Chen 1
- Juanzi Li 1
- Yue Yu 1
- Kuan Wang 1
- Chao Zhang 1
Venues
- ACL 2