Counterfactual Language Model Adaptation for Suggesting Phrases

Kenneth Arnold, Kai-Wei Chang, Adam Kalai


Abstract
Mobile devices use language models to suggest words and phrases for use in text entry. Traditional language models are based on contextual word frequency in a static corpus of text. However, certain types of phrases, when offered to writers as suggestions, may be systematically chosen more often than their frequency would predict. In this paper, we propose the task of generating suggestions that writers accept, a task related to but distinct from making accurate predictions. Although this task is fundamentally interactive, we propose a counterfactual setting that permits offline training and evaluation. We find that even a simple language model can capture text characteristics that improve acceptability.
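
The abstract describes evaluating suggestion policies counterfactually, i.e., offline from logged writer interactions rather than through new interactive studies. The following is a minimal sketch, assuming hypothetical logged records that store the suggestion shown, whether the writer accepted it, and the probability the logging policy assigned to showing it. It uses a standard inverse-propensity-scoring estimator for illustration; the record fields and function names are assumptions and do not necessarily match the formulation in the paper or in kcarnold/counterfactual-lm.

```python
# Hypothetical illustration of counterfactual (offline) evaluation of a
# suggestion policy from logged data; not taken from the paper's code.

def ips_acceptance_estimate(logged_records, new_policy_prob):
    """Inverse-propensity-scored estimate of the acceptance rate a new
    suggestion policy would achieve, computed from interactions logged
    under a different (randomized) suggestion policy.

    Each record is assumed to provide:
      context       -- text preceding the suggestion slot
      suggestion    -- the phrase that was actually shown
      accepted      -- 1 if the writer accepted it, 0 otherwise
      logging_prob  -- probability the logging policy gave to showing it
    new_policy_prob(context, suggestion) returns the probability that the
    candidate policy would show the same suggestion in that context.
    """
    total = 0.0
    for rec in logged_records:
        # Reweight each logged outcome by how much more (or less) likely
        # the candidate policy is to show this suggestion than the
        # logging policy was.
        weight = new_policy_prob(rec["context"], rec["suggestion"]) / rec["logging_prob"]
        total += weight * rec["accepted"]
    return total / len(logged_records)
```

Under a scheme of this kind, a candidate language model can be compared against the deployed one using only logged data, since the reweighting corrects for the logged suggestions having come from a different policy.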
Anthology ID: I17-2009
Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month: November
Year: 2017
Address: Taipei, Taiwan
Editors: Greg Kondrak, Taro Watanabe
Venue: IJCNLP
Publisher: Asian Federation of Natural Language Processing
Pages: 49–54
URL: https://aclanthology.org/I17-2009
Cite (ACL): Kenneth Arnold, Kai-Wei Chang, and Adam Kalai. 2017. Counterfactual Language Model Adaptation for Suggesting Phrases. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 49–54, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal): Counterfactual Language Model Adaptation for Suggesting Phrases (Arnold et al., IJCNLP 2017)
PDF: https://aclanthology.org/I17-2009.pdf
Note: I17-2009.Notes.pdf
Code: kcarnold/counterfactual-lm