Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning

Yue Yu, Jiaming Shen, Tianqi Liu, Zhen Qin, Jing Nathan Yan, Jialu Liu, Chao Zhang, Michael Bendersky


Abstract
Large language models (LLMs) have shown remarkable capabilities in various natural language understanding tasks given only a few demonstration examples via in-context learning. Common strategies for boosting this in-context learning ability are to ensemble multiple model-decoded results and to have the model generate an explanation alongside each prediction. However, these approaches often treat different class predictions equally and neglect potential discrepancies between the explanations and the predictions. To fully unleash the power of explanations, we propose EASE, an Explanation-Aware Soft Ensemble framework to empower in-context learning with LLMs. We design two techniques, explanation-guided ensemble and soft probability aggregation, to mitigate the effect of unreliable explanations and to improve the consistency between explanations and final predictions. Experiments on seven natural language understanding tasks and four LLMs of varying sizes demonstrate the effectiveness of our proposed framework.
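The abstract names the two techniques without spelling out the aggregation step. As a rough sketch (not the authors' implementation), the Python below combines both ideas under stated assumptions: each sampled explanation-prediction pair carries a per-label probability distribution (`probs`, e.g. read off the model's label-token likelihoods) and a reliability score for its explanation (`weight`); both fields, and the function name `soft_ensemble`, are hypothetical stand-ins rather than the paper's API.

```python
from collections import defaultdict

def soft_ensemble(samples):
    """Aggregate sampled (explanation, class-probability) pairs.

    `samples` is a list of dicts like
        {"explanation": str, "probs": {label: float}, "weight": float}.
    Hard majority voting would count only each sample's argmax label;
    here every sample contributes its full probability distribution,
    scaled by a reliability weight for its explanation.
    """
    totals = defaultdict(float)
    for s in samples:
        for label, p in s["probs"].items():
            totals[label] += s["weight"] * p
    norm = sum(totals.values()) or 1.0  # guard against empty input
    return {label: v / norm for label, v in totals.items()}

# Toy usage: three sampled reasoning paths for a binary task.
samples = [
    {"explanation": "...", "probs": {"pos": 0.55, "neg": 0.45}, "weight": 1.0},
    {"explanation": "...", "probs": {"pos": 0.52, "neg": 0.48}, "weight": 0.8},
    {"explanation": "...", "probs": {"pos": 0.10, "neg": 0.90}, "weight": 0.3},
]
scores = soft_ensemble(samples)
print(max(scores, key=scores.get))  # -> "neg"
```

In this toy example, two of the three samples argmax to "pos", so hard majority voting would return "pos"; the weighted soft aggregate instead favors "neg" (1.104 vs. 0.996 before normalization), since the first two predictions are barely confident, which is the kind of disagreement a soft, explanation-weighted scheme is meant to surface.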
Anthology ID: 2024.acl-long.755
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 14002–14024
URL: https://aclanthology.org/2024.acl-long.755
DOI: 10.18653/v1/2024.acl-long.755
Cite (ACL): Yue Yu, Jiaming Shen, Tianqi Liu, Zhen Qin, Jing Nathan Yan, Jialu Liu, Chao Zhang, and Michael Bendersky. 2024. Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14002–14024, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning (Yu et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.755.pdf