Comparing Pre-trained Human Language Models: Is it Better with Human Context as Groups, Individual Traits, or Both?

Nikita Soni, Niranjan Balasubramanian, H. Andrew Schwartz, Dirk Hovy


Abstract
Pre-trained language models consider the context of neighboring words and documents but lack any author context of the human generating the text. However, language depends on the author’s states, traits, social, situational, and environmental attributes, collectively referred to as human context (Soni et al., 2024). Human-centered natural language processing requires incorporating human context into language models. Currently, two methods exist: pre-training with 1) group-wise attributes (e.g., over-45-year-olds) or 2) individual traits. Group attributes are simple but coarse — not all 45-year-olds write the same way — while individual traits allow for more personalized representations, but require more complex modeling and data. It is unclear which approach benefits what tasks. We compare pre-training models with human context via 1) group attributes, 2) individual users, and 3) a combined approach on five user- and document-level tasks. Our results show that there is no best approach, but that human-centered language modeling holds avenues for different methods.
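The abstract contrasts two ways of conditioning a pre-trained language model on human context: coarse group attributes versus individual, per-author traits (or both). As a purely illustrative sketch, and not the authors' actual architecture, the PyTorch snippet below shows one way these conditioning styles could be wired in: a small embedding table over discrete group buckets, a learned per-user vector, or both added to the token representations. All class names, dimensions, and the fusion-by-addition choice are assumptions made for illustration.

```python
# Hypothetical sketch of group-wise vs. individual human-context conditioning.
# Not the method from Soni et al. (2024); names and sizes are placeholders.
import torch
import torch.nn as nn


class HumanContextLM(nn.Module):
    def __init__(self, vocab_size=1000, n_groups=4, n_users=100, d_model=64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.group_emb = nn.Embedding(n_groups, d_model)   # e.g., age buckets
        self.user_emb = nn.Embedding(n_users, d_model)     # one vector per author
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, group_ids=None, user_ids=None):
        # Ordinary word-in-context representations.
        h = self.token_emb(token_ids)
        # Group conditioning: the same coarse vector for every author in a bucket.
        if group_ids is not None:
            h = h + self.group_emb(group_ids).unsqueeze(1)
        # Individual conditioning: a per-author vector; a "both" model keeps both terms.
        if user_ids is not None:
            h = h + self.user_emb(user_ids).unsqueeze(1)
        return self.lm_head(self.encoder(h))


if __name__ == "__main__":
    model = HumanContextLM()
    tokens = torch.randint(0, 1000, (2, 16))   # a tiny batch of 2 documents
    groups = torch.tensor([0, 3])              # e.g., under-45 vs. over-45
    users = torch.tensor([7, 42])              # author indices
    logits = model(tokens, group_ids=groups, user_ids=users)
    print(logits.shape)  # torch.Size([2, 16, 1000])
```

Dropping either the `group_ids` or `user_ids` argument yields the group-only or individual-only variant, which is the kind of comparison the paper carries out across its five user- and document-level tasks.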
Anthology ID: 2024.wassa-1.26
Volume: Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Orphée De Clercq, Valentin Barriere, Jeremy Barnes, Roman Klinger, João Sedoc, Shabnam Tafreshi
Venues: WASSA | WS
Publisher: Association for Computational Linguistics
Pages: 316–328
URL: https://aclanthology.org/2024.wassa-1.26
Cite (ACL): Nikita Soni, Niranjan Balasubramanian, H. Andrew Schwartz, and Dirk Hovy. 2024. Comparing Pre-trained Human Language Models: Is it Better with Human Context as Groups, Individual Traits, or Both?. In Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis, pages 316–328, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Comparing Pre-trained Human Language Models: Is it Better with Human Context as Groups, Individual Traits, or Both? (Soni et al., WASSA-WS 2024)
PDF: https://aclanthology.org/2024.wassa-1.26.pdf