Exploring transformers and time lag features for predicting changes in mood over time

John Culnan, Damian Romero Diaz, Steven Bethard


Abstract
This paper presents transformer-based models created for the CLPsych 2022 shared task. Using posts from Reddit users over a period of time, we aim to predict changes in mood from post to post. We test models that preserve timeline information through explicit ordering of posts, as well as models that do not order posts but instead preserve features encoding the length of time between a user’s posts. We find that a model with temporal information may provide slight benefits over the same model without such information, although a RoBERTa transformer model provides enough information to make similar predictions without custom-encoded time information.
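The time-lag features mentioned above could be computed roughly as follows; this is a minimal sketch assuming each post carries a POSIX timestamp and that the lag is measured in hours between consecutive posts (the paper's exact feature encoding may differ):

```python
def time_lag_features(timestamps):
    """Compute the gap (in hours) between each post and the previous one.

    `timestamps` is a list of POSIX timestamps in chronological order;
    the first post has no predecessor, so its lag is 0.0.
    """
    lags = [0.0]
    for prev, curr in zip(timestamps, timestamps[1:]):
        lags.append((curr - prev) / 3600.0)
    return lags

# Example: three posts, the second 6 hours after the first,
# the third 48 hours after the second.
posts = [0, 6 * 3600, 54 * 3600]
print(time_lag_features(posts))  # [0.0, 6.0, 48.0]
```

Such per-post lag values could then be appended to the transformer's input features, letting an unordered model still see how much time separates a post from its predecessor.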
Anthology ID:
2022.clpsych-1.21
Volume:
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
Month:
July
Year:
2022
Address:
Seattle, USA
Editors:
Ayah Zirikly, Dana Atzil-Slonim, Maria Liakata, Steven Bedrick, Bart Desmet, Molly Ireland, Andrew Lee, Sean MacAvaney, Matthew Purver, Rebecca Resnik, Andrew Yates
Venue:
CLPsych
Publisher:
Association for Computational Linguistics
Pages:
226–231
URL:
https://aclanthology.org/2022.clpsych-1.21
DOI:
10.18653/v1/2022.clpsych-1.21
Cite (ACL):
John Culnan, Damian Romero Diaz, and Steven Bethard. 2022. Exploring transformers and time lag features for predicting changes in mood over time. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology, pages 226–231, Seattle, USA. Association for Computational Linguistics.
Cite (Informal):
Exploring transformers and time lag features for predicting changes in mood over time (Culnan et al., CLPsych 2022)
PDF:
https://aclanthology.org/2022.clpsych-1.21.pdf
Video:
https://aclanthology.org/2022.clpsych-1.21.mp4