MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection

Matthew Matero, Nikita Soni, Niranjan Balasubramanian, H. Andrew Schwartz


Abstract
Much of natural language processing is focused on leveraging large-capacity language models, typically trained over single messages with the task of predicting one or more tokens. However, modeling human language at higher levels of context (i.e., sequences of messages) is under-explored. In stance detection and other social media tasks where the goal is to predict an attribute of a message, we have contextual data that is loosely semantically connected by authorship. Here, we introduce the Message-Level Transformer (MeLT), a hierarchical message encoder pre-trained over Twitter and applied to the task of stance prediction. We focus on stance prediction as a task that benefits from knowing the context of the message (i.e., the sequence of previous messages). The model is trained using a variant of masked language modeling: instead of predicting tokens, it seeks to generate an entire masked (aggregated) message vector via a reconstruction loss. We find that applying this pre-trained masked message-level transformer to the downstream task of stance detection achieves an F1 of 67%.
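
The masked message-level objective described above lends itself to a compact sketch. The following is a minimal illustration, not the authors' released code (see matthewmatero/melt for that): each message is first aggregated into a single vector, some of those vectors are replaced by a learned mask embedding, and a transformer over the message sequence is trained to reconstruct the originals. The hidden size, mask rate, layer counts, and the choice of MSE as the reconstruction loss are assumptions made here for illustration.

import torch
import torch.nn as nn

class MessageLevelTransformer(nn.Module):
    """Sketch of masked message-level pre-training (hyperparameters assumed)."""

    def __init__(self, d_model=768, n_heads=12, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Learned embedding that stands in for a masked message vector.
        self.mask_vec = nn.Parameter(torch.randn(d_model))

    def forward(self, msg_vecs, mask_prob=0.15):
        # msg_vecs: (batch, n_messages, d_model), one aggregated vector
        # per message (e.g., pooled token states from a message encoder).
        mask = torch.rand(msg_vecs.shape[:2], device=msg_vecs.device) < mask_prob
        corrupted = msg_vecs.clone()
        corrupted[mask] = self.mask_vec          # hide the selected messages
        hidden = self.encoder(corrupted)          # contextualize over the sequence
        # Reconstruct the original vectors at masked positions only
        # (MSE as the reconstruction loss is an assumption of this sketch).
        loss = nn.functional.mse_loss(hidden[mask], msg_vecs[mask])
        return loss, hidden

# Example: 8 authors, 20 messages each, 768-dim message vectors.
model = MessageLevelTransformer()
msgs = torch.randn(8, 20, 768)
loss, _ = model(msgs)
loss.backward()

For the downstream task, the contextualized hidden state of a target message would feed a stance classification head; the paper's fine-tuning details are in the linked PDF and repository.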
Anthology ID: 2021.findings-emnlp.253
Original: 2021.findings-emnlp.253v1
Version 2: 2021.findings-emnlp.253v2
Volume: Findings of the Association for Computational Linguistics: EMNLP 2021
Month: November
Year: 2021
Address: Punta Cana, Dominican Republic
Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue: Findings
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 2959–2966
URL: https://aclanthology.org/2021.findings-emnlp.253
DOI: 10.18653/v1/2021.findings-emnlp.253
Cite (ACL): Matthew Matero, Nikita Soni, Niranjan Balasubramanian, and H. Andrew Schwartz. 2021. MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2959–2966, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal): MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection (Matero et al., Findings 2021)
PDF: https://aclanthology.org/2021.findings-emnlp.253.pdf
Video: https://aclanthology.org/2021.findings-emnlp.253.mp4
Code: matthewmatero/melt