MNLP@Multilingual Counterspeech Generation: Evaluating Translation and Background Knowledge Filtering

Emanuele Moscato, Arianna Muti, Debora Nozza


Abstract
We describe our participation in the Multilingual Counterspeech Generation shared task, which aims to generate a counternarrative that counteracts hate speech, given a hateful sentence and relevant background knowledge. Our team tested two aspects: (i) translating outputs from English versus generating them directly in the original languages, and (ii) filtering the provided background knowledge versus including all of it. Our experiments show that filtering the background knowledge within the same prompt and keeping the data in the original languages leads to more adherent counternarratives, except for Basque, where translating the output from English and filtering the background knowledge in a separate prompt yields better results. Our system ranked first for English, Italian, and Spanish, and fourth for Basque.
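
To make the two background-knowledge strategies concrete, the following is a minimal Python sketch of (a) filtering and generating within a single prompt and (b) filtering in a separate prompt before generation. The llm() helper, the prompt wording, and the function names are illustrative assumptions, not the authors' actual prompts or models.

def llm(prompt: str) -> str:
    """Placeholder for a call to an instruction-tuned LLM (hypothetical;
    replace with a real API or local-model inference call)."""
    raise NotImplementedError("plug in your own LLM call here")


def same_prompt_filtering(hate_speech: str, knowledge: list[str]) -> str:
    """Strategy (a): ask the model to keep only the relevant knowledge and
    generate the counternarrative in one single prompt."""
    prompt = (
        "You are given a hateful sentence and several background knowledge "
        "snippets. Keep only the snippets relevant to countering the sentence, "
        "then write a short counternarrative grounded in them.\n\n"
        f"Hateful sentence: {hate_speech}\n"
        "Background knowledge:\n" + "\n".join(f"- {k}" for k in knowledge)
    )
    return llm(prompt)


def separate_prompt_filtering(hate_speech: str, knowledge: list[str]) -> str:
    """Strategy (b): first filter the knowledge with one prompt, then generate
    the counternarrative with a second prompt that sees only the filtered snippets."""
    filter_prompt = (
        "Select only the background knowledge snippets relevant to countering "
        f"this hateful sentence: {hate_speech}\n"
        "Snippets:\n" + "\n".join(f"- {k}" for k in knowledge)
    )
    filtered = llm(filter_prompt)
    generation_prompt = (
        f"Write a short counternarrative to: {hate_speech}\n"
        f"Ground it in this background knowledge:\n{filtered}"
    )
    return llm(generation_prompt)

Either function can then be applied per language, or to English data whose output is later translated into the target language, mirroring the translation-versus-native-generation comparison described above.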
Anthology ID:
2025.mcg-1.7
Volume:
Proceedings of the First Workshop on Multilingual Counterspeech Generation
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Helena Bonaldi, María Estrella Vallecillo-Rodríguez, Irune Zubiaga, Arturo Montejo-Ráez, Aitor Soroa, María Teresa Martín-Valdivia, Marco Guerini, Rodrigo Agerri
Venues:
MCG | WS
Publisher:
Association for Computational Linguistics
Pages:
56–64
URL:
https://aclanthology.org/2025.mcg-1.7/
Cite (ACL):
Emanuele Moscato, Arianna Muti, and Debora Nozza. 2025. MNLP@Multilingual Counterspeech Generation: Evaluating Translation and Background Knowledge Filtering. In Proceedings of the First Workshop on Multilingual Counterspeech Generation, pages 56–64, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
MNLP@Multilingual Counterspeech Generation: Evaluating Translation and Background Knowledge Filtering (Moscato et al., MCG 2025)
PDF:
https://aclanthology.org/2025.mcg-1.7.pdf