Outcome-Constrained Large Language Models for Countering Hate Speech

Lingzi Hong, Pengcheng Luo, Eduardo Blanco, Xiaoying Song


Abstract
Automatic counterspeech generation methods have been developed to assist efforts to combat hate speech. Existing research focuses on generating counterspeech with linguistic attributes such as being polite, informative, and intent-driven. However, the real impact of counterspeech in online environments is seldom considered. This study aims to develop methods for generating counterspeech constrained by conversation outcomes and to evaluate their effectiveness. We experiment with large language models (LLMs) to incorporate two desired conversation outcomes into the text generation process: low conversation incivility and non-hateful hater reentry. Specifically, we experiment with instruction prompts, LLM finetuning, and LLM reinforcement learning (RL). Evaluation results show that our methods effectively steer the generation of counterspeech toward the desired outcomes. Our analyses, however, show that the quality and style of the generated counterspeech differ across models.
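The instruction-prompt approach described in the abstract can be sketched as follows. This is a minimal illustration of outcome-constrained prompting, assuming hypothetical prompt wording and outcome labels; the authors' actual prompts and the finetuning/RL setups are not shown here.

```python
# Hypothetical sketch of outcome-constrained instruction prompting.
# The outcome labels and instruction wording are illustrative
# assumptions, not the paper's exact prompts.

OUTCOME_INSTRUCTIONS = {
    "low_incivility": (
        "Write a counterspeech reply likely to keep the ensuing "
        "conversation civil and de-escalated."
    ),
    "reentry": (
        "Write a counterspeech reply that encourages the original "
        "poster to reengage without further hateful remarks."
    ),
}

def build_prompt(hate_speech: str, outcome: str) -> str:
    """Compose an instruction prompt that constrains generation
    toward one of the two desired conversation outcomes."""
    instruction = OUTCOME_INSTRUCTIONS[outcome]
    return f"{instruction}\n\nHate speech: {hate_speech}\nCounterspeech:"
```

The resulting prompt string would then be passed to an LLM; in the finetuning and RL variants, the outcome constraint is instead learned from outcome-labeled conversation data or a reward signal.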
Anthology ID:
2024.emnlp-main.260
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4523–4536
URL:
https://aclanthology.org/2024.emnlp-main.260
Cite (ACL):
Lingzi Hong, Pengcheng Luo, Eduardo Blanco, and Xiaoying Song. 2024. Outcome-Constrained Large Language Models for Countering Hate Speech. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4523–4536, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Outcome-Constrained Large Language Models for Countering Hate Speech (Hong et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.260.pdf