Leveraging Similar Users for Personalized Language Modeling with Limited Data

Charles Welch, Chenxi Gu, Jonathan K. Kummerfeld, Veronica Perez-Rosas, Rada Mihalcea


Abstract
Personalized language models are designed and trained to capture language patterns specific to individual users. This makes them more accurate at predicting what a user will write. However, when a new user joins a platform and not enough text is available, it is harder to build effective personalized language models. We propose a solution for this problem, using a model trained on users that are similar to a new user. In this paper, we explore strategies for finding the similarity between new users and existing ones and methods for using the data from existing users who are a good match. We further explore the trade-off between available data for new users and how well their language can be modeled.
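The abstract's core idea — ranking existing users by how similar their language is to a new user's limited text, then borrowing the best matches — can be illustrated with a minimal sketch. The paper explores several similarity strategies; the bag-of-words cosine similarity, the function names, and the toy data below are all illustrative assumptions, not the authors' actual method.

```python
from collections import Counter
import math

def user_vector(texts):
    # Illustrative assumption: represent a user as a normalized
    # bag-of-words frequency vector over all of their text.
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine(u, v):
    # Cosine similarity between two sparse word-frequency vectors.
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def most_similar_users(new_user_texts, existing_users, k=2):
    # Rank existing users by similarity to the new user's limited
    # text and return the ids of the top-k matches, whose data (or
    # models) could then be leveraged for the new user.
    target = user_vector(new_user_texts)
    ranked = sorted(existing_users.items(),
                    key=lambda kv: cosine(target, user_vector(kv[1])),
                    reverse=True)
    return [uid for uid, _ in ranked[:k]]
```

In a usage scenario, a new user with only one post (e.g. "went hiking today, love the trails") would match an existing user who writes about hiking more closely than one who writes about finance, so that user's data or personalized model would be selected for transfer.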
Anthology ID:
2022.acl-long.122
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1742–1752
URL:
https://aclanthology.org/2022.acl-long.122
DOI:
10.18653/v1/2022.acl-long.122
Cite (ACL):
Charles Welch, Chenxi Gu, Jonathan K. Kummerfeld, Veronica Perez-Rosas, and Rada Mihalcea. 2022. Leveraging Similar Users for Personalized Language Modeling with Limited Data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1742–1752, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Leveraging Similar Users for Personalized Language Modeling with Limited Data (Welch et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.122.pdf
Video:
https://aclanthology.org/2022.acl-long.122.mp4