Can Contextualizing User Embeddings Improve Sarcasm and Hate Speech Detection?

Kim Breitwieser


Abstract
While implicit user embeddings have so far mostly been used to create an overall representation of a user, we evaluate a different approach: by considering only content directed at a specific topic, we create sub-user embeddings and measure their usefulness on the tasks of sarcasm and hate speech detection. In doing so, we show that task-related topics can have a noticeable effect on model performance, especially for intended expressions such as sarcasm, but less so for hate speech, which is usually labelled as such on the receiving end.
Anthology ID:
2022.nlpcss-1.14
Volume:
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
Month:
November
Year:
2022
Address:
Abu Dhabi, UAE
Editors:
David Bamman, Dirk Hovy, David Jurgens, Katherine Keith, Brendan O'Connor, Svitlana Volkova
Venue:
NLP+CSS
Publisher:
Association for Computational Linguistics
Pages:
126–139
URL:
https://aclanthology.org/2022.nlpcss-1.14
DOI:
10.18653/v1/2022.nlpcss-1.14
Cite (ACL):
Kim Breitwieser. 2022. Can Contextualizing User Embeddings Improve Sarcasm and Hate Speech Detection?. In Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS), pages 126–139, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Can Contextualizing User Embeddings Improve Sarcasm and Hate Speech Detection? (Breitwieser, NLP+CSS 2022)
PDF:
https://aclanthology.org/2022.nlpcss-1.14.pdf
Dataset:
 2022.nlpcss-1.14.dataset.zip