Reconsidering the Past: Optimizing Hidden States in Language Models

Davis Yoshida, Kevin Gimpel


Abstract
We present Hidden-State Optimization (HSO), a gradient-based method for improving the performance of transformer language models at inference time. Similar to dynamic evaluation (Krause et al., 2018), HSO computes the gradient of the log-probability the language model assigns to an evaluation text, but uses it to update the cached hidden states rather than the model parameters. We test HSO with pretrained Transformer-XL and GPT-2 language models, finding improvement on the WikiText-103 and PG-19 datasets in terms of perplexity, especially when evaluating a model outside of its training distribution. We also demonstrate downstream applicability by showing gains in the recently developed prompt-based few-shot evaluation setting, again with no extra parameters or training data.
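The sketch below illustrates, very roughly, the idea described in the abstract: treat the cached key/value hidden states as the quantities being optimized and take a few gradient steps on the log-likelihood of an evaluation segment, leaving the model parameters untouched. It is a minimal sketch, not the paper's method as implemented: it assumes a HuggingFace GPT-2 checkpoint with the tuple-style past_key_values interface, and the optimizer choice, learning rate, and number of steps are illustrative placeholders rather than the paper's settings; the paper's windowing and evaluation protocol are omitted.

    # Minimal sketch of gradient-based cache updating (assumptions noted above).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    context = tokenizer("Some earlier text the model has already read.", return_tensors="pt")
    segment = tokenizer(" The next segment to be scored.", return_tensors="pt")

    # 1) Run the context once to obtain the cached hidden states (key/value cache).
    with torch.no_grad():
        past = model(**context, use_cache=True).past_key_values

    # 2) Turn the cached states into leaf tensors so we can differentiate w.r.t. them.
    past = tuple(tuple(t.detach().clone().requires_grad_(True) for t in layer) for layer in past)
    cache_tensors = [t for layer in past for t in layer]
    opt = torch.optim.SGD(cache_tensors, lr=1e-3)  # illustrative optimizer and step size

    # 3) A few gradient steps on the segment's log-likelihood, updating the cache
    #    rather than the model parameters.
    for _ in range(3):
        opt.zero_grad()
        out = model(input_ids=segment["input_ids"],
                    past_key_values=past,
                    labels=segment["input_ids"])
        out.loss.backward()  # loss = negative log-likelihood of the segment given the cache
        opt.step()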
Anthology ID:
2021.findings-emnlp.346
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4099–4105
URL:
https://aclanthology.org/2021.findings-emnlp.346
DOI:
10.18653/v1/2021.findings-emnlp.346
Cite (ACL):
Davis Yoshida and Kevin Gimpel. 2021. Reconsidering the Past: Optimizing Hidden States in Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4099–4105, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Reconsidering the Past: Optimizing Hidden States in Language Models (Yoshida & Gimpel, Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.346.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.346.mp4
Data:
AG News, PG-19, SST, SST-2, WikiText-103, WikiText-2