EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning

Raja Sekhar Reddy Mekala, Yasaman Razeghi, Sameer Singh


Abstract
Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is tailored for four scenarios, including standard and chain-of-thought prompting, in both zero-shot and few-shot settings. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g., GSM8K, SVAMP), reading comprehension (e.g., DROP), and logical reasoning (e.g., Coin flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks. Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance.
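The abstract describes prompting the model to rephrase its query before answering. A minimal sketch of what such a zero-shot prompt might look like is below; the exact instruction wording and the helper name are assumptions for illustration, not taken from the paper.

```python
# Sketch of an EchoPrompt-style zero-shot prompt builder.
# The instruction text below is an illustrative assumption, not the
# paper's exact wording.

def build_echoprompt(question: str) -> str:
    """Wrap a query so the model first restates it, then reasons step by step."""
    return (
        f"Q: {question}\n"
        "A: Let's repeat the question and also think step by step.\n"
    )

prompt = build_echoprompt(
    "A bag has 3 red and 5 blue marbles. How many marbles are in the bag?"
)
print(prompt)
```

The returned string would be sent to a language model as-is; the cue to restate the question is what distinguishes this from plain zero-shot chain-of-thought prompting.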
Anthology ID:
2024.naacl-short.35
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
399–432
URL:
https://aclanthology.org/2024.naacl-short.35
DOI:
10.18653/v1/2024.naacl-short.35
Cite (ACL):
Raja Sekhar Reddy Mekala, Yasaman Razeghi, and Sameer Singh. 2024. EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 399–432, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning (Mekala et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-short.35.pdf