Measuring the Language of Self-Disclosure across Corpora

Ann-Katrin Reuel, Sebastian Peralta, João Sedoc, Garrick Sherman, Lyle Ungar


Abstract
Being able to reliably estimate self-disclosure – a key component of friendship and intimacy – from language is important for many psychology studies. We build single-task models on five self-disclosure corpora, but find that these models generalize poorly; the within-domain accuracy of message-level self-disclosure prediction for the best-performing model (mean Pearson’s r = 0.69) is much higher than its cross-dataset accuracy (mean Pearson’s r = 0.32), due both to variation across the corpora (e.g., medical vs. general topics) and to differing labeling instructions (target variables: self-disclosure, emotional disclosure, intimacy). However, some lexical features, such as expressions of negative emotion and use of first-person pronouns such as ‘I’, reliably predict self-disclosure across corpora. We develop a multi-task model that yields better results, with a mean Pearson’s r of 0.37 for out-of-corpus prediction.
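As a rough illustration of the evaluation setup described above (not the authors' code), the sketch below computes Pearson's r between gold message-level self-disclosure ratings and model predictions, once within-corpus and once on a held-out corpus; all arrays are invented for demonstration.

# Minimal sketch, assuming hypothetical gold labels and model scores:
# the paper evaluates message-level self-disclosure prediction with
# Pearson's r, within-domain and across corpora.
from scipy.stats import pearsonr

# Invented gold self-disclosure ratings and model predictions.
gold_within = [0.1, 0.8, 0.5, 0.9, 0.2]
pred_within = [0.2, 0.7, 0.6, 0.8, 0.1]

gold_cross = [0.3, 0.6, 0.4, 0.9, 0.1]
pred_cross = [0.7, 0.2, 0.5, 0.4, 0.6]

# pearsonr returns (correlation, p-value); we only need the correlation.
r_within, _ = pearsonr(gold_within, pred_within)
r_cross, _ = pearsonr(gold_cross, pred_cross)
print(f"within-corpus r = {r_within:.2f}, cross-corpus r = {r_cross:.2f}")

In the paper, the within-domain correlations (mean r = 0.69 for the best single-task model) are substantially higher than the cross-corpus ones (mean r = 0.32), which motivates the multi-task model.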
Anthology ID: 2022.findings-acl.83
Volume: Findings of the Association for Computational Linguistics: ACL 2022
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1035–1047
URL: https://aclanthology.org/2022.findings-acl.83
DOI: 10.18653/v1/2022.findings-acl.83
Cite (ACL): Ann-Katrin Reuel, Sebastian Peralta, João Sedoc, Garrick Sherman, and Lyle Ungar. 2022. Measuring the Language of Self-Disclosure across Corpora. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1035–1047, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): Measuring the Language of Self-Disclosure across Corpora (Reuel et al., Findings 2022)
PDF: https://aclanthology.org/2022.findings-acl.83.pdf