Enhancing Language Model Factuality via Activation-Based Confidence Calibration and Guided Decoding

Xin Liu, Farima Fatahi Bayat, Lu Wang


Abstract
Calibrating language models (LMs) aligns their generation confidence with the actual likelihood of answer correctness, which can inform users about LMs’ reliability and mitigate hallucinated content. However, prior calibration methods, such as self-consistency-based and logit-based approaches, are either limited in inference-time efficiency or fall short of providing informative signals. Moreover, simply filtering out low-confidence responses reduces the LM’s helpfulness when the answers are correct. Therefore, effectively using calibration techniques to enhance an LM’s factuality remains an unsolved challenge. In this paper, we first propose an activation-based calibration method, ActCab, which trains a linear layer on top of the LM’s last-layer activations to better capture the representations of knowledge. Built on top of ActCab, we further propose CoDec, a confidence-guided decoding strategy to elicit truthful answers with high confidence from LMs. In evaluations on five popular QA benchmarks, ActCab achieves superior calibration performance to all competitive baselines, e.g., reducing the average expected calibration error (ECE) score by up to 39%. Further experiments on CoDec show consistent improvements in several LMs’ factuality on challenging QA datasets, such as TruthfulQA, highlighting the value of confidence signals in enhancing factuality.
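The core idea of an activation-based calibrator — a linear layer over last-layer activations trained to predict answer correctness, evaluated with ECE — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the synthetic "activations", the probe's training loop, and the binning scheme are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for last-layer activations: examples whose answers
# are correct (label 1) are shifted relative to incorrect ones (label 0).
d, n = 16, 2000
labels = rng.integers(0, 2, size=n)            # 1 = answer was correct
acts = rng.normal(size=(n, d)) + labels[:, None] * 0.8

# Linear probe: confidence = sigmoid(w·h + b), trained with plain gradient
# descent on binary cross-entropy over the activation vectors.
w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))
    grad = p - labels                           # dBCE/dlogit
    w -= lr * (acts.T @ grad) / n
    b -= lr * grad.mean()

conf = 1.0 / (1.0 + np.exp(-(acts @ w + b)))   # calibrated confidence scores

def ece(conf, labels, n_bins=10):
    """Expected calibration error: bin-weighted |accuracy - mean confidence|."""
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for k in range(n_bins):
        mask = bins == k
        if mask.any():
            total += mask.mean() * abs(labels[mask].mean() - conf[mask].mean())
    return total

print(f"ECE: {ece(conf, labels):.3f}")
```

A confidence-guided decoder in the spirit of CoDec would then consult such a probe's score during generation to steer toward high-confidence answers; the details of that strategy are in the paper itself.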
Anthology ID:
2024.emnlp-main.583
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10436–10448
URL:
https://aclanthology.org/2024.emnlp-main.583
Cite (ACL):
Xin Liu, Farima Fatahi Bayat, and Lu Wang. 2024. Enhancing Language Model Factuality via Activation-Based Confidence Calibration and Guided Decoding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10436–10448, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Enhancing Language Model Factuality via Activation-Based Confidence Calibration and Guided Decoding (Liu et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.583.pdf