Yixin Wu
2024
Searching for Best Practices in Retrieval-Augmented Generation
Xiaohua Wang | Zhenghua Wang | Xuan Gao | Feiran Zhang | Yixin Wu | Zhibo Xu | Tianyuan Shi | Zhengyuan Wang | Shizheng Li | Qi Qian | Ruicheng Yin | Changze Lv | Xiaoqing Zheng | Xuanjing Huang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Retrieval-augmented generation (RAG) techniques have proven to be effective in integrating up-to-date information, mitigating hallucinations, and enhancing response quality, particularly in specialized domains. While many RAG approaches have been proposed to enhance large language models through query-dependent retrievals, these approaches still suffer from complex implementations and prolonged response times. Typically, a RAG workflow involves multiple processing steps, each of which can be executed in various ways. Here, we investigate existing RAG approaches and their potential combinations to identify optimal RAG practices. Through extensive experiments, we suggest several strategies for deploying RAG that balance both performance and efficiency. Moreover, we demonstrate that multimodal retrieval techniques can significantly enhance question-answering capabilities about visual inputs and accelerate the generation of multimodal content using a “retrieval as generation” strategy.
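To make the "multiple processing steps" concrete, here is a minimal sketch of a modular, query-dependent RAG workflow. It is not the authors' implementation: the retriever, reranker, and generator below are toy stand-ins (lexical-overlap scoring and prompt assembly instead of real embedding models and an LLM call), and all function names are hypothetical. It only illustrates how retrieve, rerank, repack, and generate compose as interchangeable modules.

```python
# Hypothetical sketch of a modular RAG pipeline; each step is a swappable module.
from collections import Counter

DOCS = [
    "RAG pipelines retrieve documents and condition generation on them.",
    "Reranking reorders retrieved passages by estimated relevance.",
    "Repacking decides the order in which passages appear in the prompt.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy lexical retriever: score documents by word overlap with the query."""
    q = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: -sum(q[w] for w in d.lower().split()))
    return scored[:k]

def rerank(query: str, passages: list[str]) -> list[str]:
    """Placeholder reranker; a real system would use a cross-encoder here."""
    return sorted(passages, key=lambda p: -len(set(query.split()) & set(p.split())))

def repack(passages: list[str]) -> str:
    """'Reverse' repacking: place the most relevant passage closest to the question."""
    return "\n".join(reversed(passages))

def generate(query: str, context: str) -> str:
    """Stand-in for an LLM call: just assemble the final prompt."""
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

query = "What does reranking do in a RAG pipeline?"
print(generate(query, repack(rerank(query, retrieve(query, DOCS)))))
```

Because each stage has the same list-in/list-out shape, any single module can be replaced (e.g., a dense retriever for the lexical one) without touching the rest of the pipeline, which is the kind of combination space the paper searches over.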
The Death and Life of Great Prompts: Analyzing the Evolution of LLM Prompts from the Structural Perspective
Yihan Ma | Xinyue Shen | Yixin Wu | Boyang Zhang | Michael Backes | Yang Zhang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Effective utilization of large language models (LLMs), such as ChatGPT, relies on the quality of input prompts. This paper explores prompt engineering, specifically focusing on the disparity between experimentally designed prompts and real-world “in-the-wild” prompts. We analyze 10,538 in-the-wild prompts collected from various platforms and develop a framework that decomposes the prompts into eight key components. Our analysis shows that Requirement is one of the two most prevalent components. Roles specified in the prompts, along with their capabilities, have become increasingly varied over time, signifying a broader range of application scenarios for LLMs. However, judging from GPT-4’s responses, specifying a role yields only a marginal improvement, whereas leveraging less prevalent components such as Capability and Demonstration can produce more satisfying responses. Overall, our work sheds light on the essential components of in-the-wild prompts and their effectiveness within the broader landscape of LLM prompt engineering, providing valuable guidelines for the LLM community to craft high-quality prompts.
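A brief sketch of the component-based view of a prompt, assuming a simple template assembly. Only the components the abstract names (Role, Capability, Demonstration, Requirement) are included; the dictionary keys, ordering, and `build_prompt` helper are illustrative assumptions, not the paper's framework.

```python
# Hypothetical composition of a prompt from named structural components.
COMPONENTS = {
    "role": "You are an experienced technical editor.",           # Role
    "capability": "You can spot grammar errors and awkward phrasing.",  # Capability
    "demonstration": "Example: 'teh results' -> 'the results'.",  # Demonstration
    "requirement": "Return only the corrected sentence, no commentary.",  # Requirement
}

def build_prompt(task: str, components: dict[str, str]) -> str:
    """Concatenate whichever components are present, in a fixed order, then the task."""
    order = ["role", "capability", "demonstration", "requirement"]
    parts = [components[name] for name in order if name in components]
    parts.append(f"Task: {task}")
    return "\n".join(parts)

print(build_prompt("Fix: 'The resuls were suprising.'", COMPONENTS))
```

Treating each component as an optional, named slot is what lets one measure, as the paper does, how adding or dropping a single component (e.g., a Demonstration) changes response quality.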