Pre-Training Language Models for Identifying Patronizing and Condescending Language: An Analysis

Carla Perez Almendros, Luis Espinosa Anke, Steven Schockaert


Abstract
Patronizing and Condescending Language (PCL) is a subtle but harmful type of discourse, yet the task of recognizing PCL remains under-studied by the NLP community. Recognizing PCL is challenging because of its subtle nature, because available datasets are limited in size, and because this task often relies on some form of commonsense knowledge. In this paper, we study to what extent PCL detection models can be improved by pre-training them on other, more established NLP tasks. We find that performance gains are indeed possible in this way, in particular when pre-training on tasks focusing on sentiment, harmful language and commonsense morality. In contrast, for tasks focusing on political speech and social justice, no or only very small improvements were observed. These findings improve our understanding of the nature of PCL.
Anthology ID: 2022.lrec-1.415
Volume: Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month: June
Year: 2022
Address: Marseille, France
Editors: Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue: LREC
Publisher: European Language Resources Association
Pages: 3902–3911
URL: https://aclanthology.org/2022.lrec-1.415
Cite (ACL): Carla Perez Almendros, Luis Espinosa Anke, and Steven Schockaert. 2022. Pre-Training Language Models for Identifying Patronizing and Condescending Language: An Analysis. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3902–3911, Marseille, France. European Language Resources Association.
Cite (Informal): Pre-Training Language Models for Identifying Patronizing and Condescending Language: An Analysis (Perez Almendros et al., LREC 2022)
PDF: https://aclanthology.org/2022.lrec-1.415.pdf
Data: ETHICS, StereoSet