Adaptive Semiparametric Language Models

Dani Yogatama, Cyprien de Masson d’Autume, Lingpeng Kong


Abstract
We present a language model that combines a large parametric neural network (i.e., a transformer) with a non-parametric episodic memory component in an integrated architecture. Our model uses extended short-term context by caching local hidden states—similar to transformer-XL—and global long-term memory by retrieving a set of nearest neighbor tokens at each timestep. We design a gating function to adaptively combine multiple information sources to make a prediction. This mechanism allows the model to use either local context, short-term memory, or long-term memory (or any combination of them) on an ad hoc basis depending on the context. Experiments on word-based and character-based language modeling datasets demonstrate the efficacy of our proposed method compared to strong baselines.
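As a rough illustration of the gating mechanism described in the abstract, the sketch below mixes three context vectors (the local hidden state, a cached short-term memory summary, and a retrieved long-term memory summary) with context-dependent weights. The class name AdaptiveGate, the softmax parameterization, and all dimensions are assumptions made for this example only; the paper's exact formulation may differ.

import torch
import torch.nn as nn

class AdaptiveGate(nn.Module):
    # Hypothetical illustration of a per-timestep gate over three
    # information sources; not the authors' released implementation.
    def __init__(self, d_model: int, num_sources: int = 3):
        super().__init__()
        # Predict one mixing weight per source from the current hidden state.
        self.proj = nn.Linear(d_model, num_sources)

    def forward(self, h_local, h_short, h_long):
        # Each input: (batch, d_model) summary of local context, cached
        # short-term memory, and retrieved long-term (nearest-neighbor) memory.
        sources = torch.stack([h_local, h_short, h_long], dim=1)  # (B, 3, d)
        weights = torch.softmax(self.proj(h_local), dim=-1)       # (B, 3)
        # Convex combination of the three context vectors.
        return (weights.unsqueeze(-1) * sources).sum(dim=1)       # (B, d)

# Example usage with random tensors.
gate = AdaptiveGate(d_model=512)
h = [torch.randn(2, 512) for _ in range(3)]
mixed = gate(*h)  # (2, 512), would feed the output softmax over the vocabulary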
Anthology ID: 2021.tacl-1.22
Volume: Transactions of the Association for Computational Linguistics, Volume 9
Year: 2021
Address: Cambridge, MA
Editors: Brian Roark, Ani Nenkova
Venue: TACL
Publisher: MIT Press
Pages: 362–373
URL: https://aclanthology.org/2021.tacl-1.22
DOI: 10.1162/tacl_a_00371
Cite (ACL): Dani Yogatama, Cyprien de Masson d’Autume, and Lingpeng Kong. 2021. Adaptive Semiparametric Language Models. Transactions of the Association for Computational Linguistics, 9:362–373.
Cite (Informal): Adaptive Semiparametric Language Models (Yogatama et al., TACL 2021)
PDF: https://aclanthology.org/2021.tacl-1.22.pdf