What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text

Kathleen Fraser, Svetlana Kiritchenko, Isar Nejadgholi, Anna Kerkhof


Abstract
When harmful social stereotypes are expressed on a public platform, they must be addressed in a way that educates and informs both the original poster and other readers, without causing offence or perpetuating new stereotypes. In this paper, we synthesize findings from psychology and computer science to propose a set of potential counter-stereotype strategies. We then automatically generate such counter-stereotypes using ChatGPT, and analyze their correctness and expected effectiveness at reducing stereotypical associations. We identify denouncing the stereotype, warning of consequences, and using an empathetic tone as three promising strategies for further testing.
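The minimal sketch below (not taken from the paper) illustrates how a strategy-conditioned counter-stereotype might be generated with the OpenAI chat API, as the abstract describes doing with ChatGPT. The prompt wording, model choice, and strategy descriptions are assumptions for illustration only, not the authors' actual prompts.

# Illustrative sketch only: the prompt, model, and strategy descriptions are
# assumptions, not the prompts or code used in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical strategy descriptions (the paper's exact formulations may differ).
STRATEGIES = {
    "denouncing": "briefly explain why the stereotype is harmful and unacceptable",
    "consequences": "warn about the real-world consequences of spreading this stereotype",
    "empathy": "respond in an empathetic, non-confrontational tone",
}

def generate_counter_stereotype(stereotype_text: str, strategy: str) -> str:
    """Ask the model for a short counter-stereotype using the chosen strategy."""
    prompt = (
        f"The following text expresses a social stereotype:\n\"{stereotype_text}\"\n"
        f"Write a one- or two-sentence response that counters it; {STRATEGIES[strategy]}."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()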
Anthology ID:
2023.sicon-1.4
Volume:
Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Kushal Chawla, Weiyan Shi
Venue:
SICon
Publisher:
Association for Computational Linguistics
Pages:
25–38
URL:
https://aclanthology.org/2023.sicon-1.4
DOI:
10.18653/v1/2023.sicon-1.4
Cite (ACL):
Kathleen Fraser, Svetlana Kiritchenko, Isar Nejadgholi, and Anna Kerkhof. 2023. What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text. In Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023), pages 25–38, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text (Fraser et al., SICon 2023)
PDF:
https://aclanthology.org/2023.sicon-1.4.pdf
Video:
https://aclanthology.org/2023.sicon-1.4.mp4